00:00:00.000 Started by upstream project "autotest-per-patch" build number 131204
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.054 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.055 The recommended git tool is: git
00:00:00.055 using credential 00000000-0000-0000-0000-000000000002
00:00:00.056 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.080 Fetching changes from the remote Git repository
00:00:00.083 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.136 Using shallow fetch with depth 1
00:00:00.136 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.136 > git --version # timeout=10
00:00:00.191 > git --version # 'git version 2.39.2'
00:00:00.191 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.574 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.585 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.597 Checking out Revision 3f5fbcceba25866ebf7e22fd0e5d30548272f62c (FETCH_HEAD)
00:00:04.597 > git config core.sparsecheckout # timeout=10
00:00:04.611 > git read-tree -mu HEAD # timeout=10
00:00:04.627 > git checkout -f 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=5
00:00:04.646 Commit message: "packer: Bump java's version"
00:00:04.646 > git rev-list --no-walk 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=10
00:00:04.738 [Pipeline] Start of Pipeline
00:00:04.795 [Pipeline] library
00:00:04.798 Loading library shm_lib@master
00:00:04.798 Library shm_lib@master is cached. Copying from home.
00:00:04.812 [Pipeline] node
00:00:04.827 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.829 [Pipeline] {
00:00:04.838 [Pipeline] catchError
00:00:04.839 [Pipeline] {
00:00:04.852 [Pipeline] wrap
00:00:04.860 [Pipeline] {
00:00:04.869 [Pipeline] stage
00:00:04.872 [Pipeline] { (Prologue)
00:00:05.058 [Pipeline] sh
00:00:05.341 + logger -p user.info -t JENKINS-CI
00:00:05.357 [Pipeline] echo
00:00:05.359 Node: WFP6
00:00:05.364 [Pipeline] sh
00:00:05.656 [Pipeline] setCustomBuildProperty
00:00:05.667 [Pipeline] echo
00:00:05.668 Cleanup processes
00:00:05.672 [Pipeline] sh
00:00:05.954 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.954 941616 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.965 [Pipeline] sh
00:00:06.249 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.249 ++ grep -v 'sudo pgrep'
00:00:06.249 ++ awk '{print $1}'
00:00:06.249 + sudo kill -9
00:00:06.249 + true
00:00:06.263 [Pipeline] cleanWs
00:00:06.273 [WS-CLEANUP] Deleting project workspace...
00:00:06.273 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.279 [WS-CLEANUP] done
00:00:06.283 [Pipeline] setCustomBuildProperty
00:00:06.294 [Pipeline] sh
00:00:06.571 + sudo git config --global --replace-all safe.directory '*'
00:00:06.649 [Pipeline] httpRequest
00:00:07.369 [Pipeline] echo
00:00:07.371 Sorcerer 10.211.164.101 is alive
00:00:07.380 [Pipeline] retry
00:00:07.382 [Pipeline] {
00:00:07.392 [Pipeline] httpRequest
00:00:07.395 HttpMethod: GET
00:00:07.395 URL: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz
00:00:07.396 Sending request to url: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz
00:00:07.412 Response Code: HTTP/1.1 200 OK
00:00:07.413 Success: Status code 200 is in the accepted range: 200,404
00:00:07.413 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz
00:00:11.510 [Pipeline] }
00:00:11.527 [Pipeline] // retry
00:00:11.534 [Pipeline] sh
00:00:11.814 + tar --no-same-owner -xf jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz
00:00:11.831 [Pipeline] httpRequest
00:00:12.242 [Pipeline] echo
00:00:12.244 Sorcerer 10.211.164.101 is alive
00:00:12.254 [Pipeline] retry
00:00:12.257 [Pipeline] {
00:00:12.272 [Pipeline] httpRequest
00:00:12.277 HttpMethod: GET
00:00:12.277 URL: http://10.211.164.101/packages/spdk_96764f31c093ed2f52ac57c2f5999501dd1e1e2c.tar.gz
00:00:12.278 Sending request to url: http://10.211.164.101/packages/spdk_96764f31c093ed2f52ac57c2f5999501dd1e1e2c.tar.gz
00:00:12.290 Response Code: HTTP/1.1 200 OK
00:00:12.290 Success: Status code 200 is in the accepted range: 200,404
00:00:12.291 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_96764f31c093ed2f52ac57c2f5999501dd1e1e2c.tar.gz
00:01:13.407 [Pipeline] }
00:01:13.424 [Pipeline] // retry
00:01:13.431 [Pipeline] sh
00:01:13.720 + tar --no-same-owner -xf spdk_96764f31c093ed2f52ac57c2f5999501dd1e1e2c.tar.gz
00:01:16.272 [Pipeline] sh
00:01:16.557 + git -C spdk log --oneline -n5
00:01:16.557 96764f31c nvme: Add transport interface to enable interrupts
00:01:16.557 874e4a5f6 env_dpdk: new interfaces for pci device multi interrupt
00:01:16.557 dc97cf33b env_dpdk: add required APIs to handle interrupt
00:01:16.557 5a8c76d99 lib/nvmf: Add spdk_nvmf_send_discovery_log_notice API
00:01:16.557 a70c3a90b bdev/lvol: add allocated clusters num in bdev_lvol_get_lvols
00:01:16.568 [Pipeline] }
00:01:16.583 [Pipeline] // stage
00:01:16.591 [Pipeline] stage
00:01:16.593 [Pipeline] { (Prepare)
00:01:16.609 [Pipeline] writeFile
00:01:16.624 [Pipeline] sh
00:01:16.980 + logger -p user.info -t JENKINS-CI
00:01:16.996 [Pipeline] sh
00:01:17.279 + logger -p user.info -t JENKINS-CI
00:01:17.290 [Pipeline] sh
00:01:17.576 + cat autorun-spdk.conf
00:01:17.576 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.576 SPDK_TEST_NVMF=1
00:01:17.576 SPDK_TEST_NVME_CLI=1
00:01:17.576 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:17.576 SPDK_TEST_NVMF_NICS=e810
00:01:17.576 SPDK_TEST_VFIOUSER=1
00:01:17.576 SPDK_RUN_UBSAN=1
00:01:17.576 NET_TYPE=phy
00:01:17.583 RUN_NIGHTLY=0
00:01:17.587 [Pipeline] readFile
00:01:17.608 [Pipeline] withEnv
00:01:17.610 [Pipeline] {
00:01:17.622 [Pipeline] sh
00:01:17.906 + set -ex
00:01:17.906 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:17.906 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:17.906 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.906 ++ SPDK_TEST_NVMF=1
00:01:17.906 ++ SPDK_TEST_NVME_CLI=1
00:01:17.906 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:17.906 ++ SPDK_TEST_NVMF_NICS=e810
00:01:17.906 ++ SPDK_TEST_VFIOUSER=1
00:01:17.906 ++ SPDK_RUN_UBSAN=1
00:01:17.906 ++ NET_TYPE=phy
00:01:17.906 ++ RUN_NIGHTLY=0
00:01:17.906 + case $SPDK_TEST_NVMF_NICS in
00:01:17.906 + DRIVERS=ice
00:01:17.906 + [[ tcp == \r\d\m\a ]]
00:01:17.906 + [[ -n ice ]]
00:01:17.906 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:17.906 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:17.906 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:17.906 rmmod: ERROR: Module irdma is not currently loaded
00:01:17.906 rmmod: ERROR: Module i40iw is not currently loaded
00:01:17.906 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:17.906 + true
00:01:17.906 + for D in $DRIVERS
00:01:17.906 + sudo modprobe ice
00:01:17.906 + exit 0
00:01:17.913 [Pipeline] }
00:01:17.922 [Pipeline] // withEnv
00:01:17.925 [Pipeline] }
00:01:17.932 [Pipeline] // stage
00:01:17.938 [Pipeline] catchError
00:01:17.939 [Pipeline] {
00:01:17.946 [Pipeline] timeout
00:01:17.947 Timeout set to expire in 1 hr 0 min
00:01:17.948 [Pipeline] {
00:01:17.957 [Pipeline] stage
00:01:17.959 [Pipeline] { (Tests)
00:01:17.970 [Pipeline] sh
00:01:18.255 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:18.255 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:18.255 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:18.255 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:18.255 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:18.255 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:18.255 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:18.255 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:18.255 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:18.255 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:18.255 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:18.255 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:18.255 + source /etc/os-release
00:01:18.255 ++ NAME='Fedora Linux'
00:01:18.255 ++ VERSION='39 (Cloud Edition)'
00:01:18.255 ++ ID=fedora
00:01:18.255 ++ VERSION_ID=39
00:01:18.255 ++ VERSION_CODENAME=
00:01:18.255 ++ PLATFORM_ID=platform:f39
00:01:18.255 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:18.255 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:18.255 ++ LOGO=fedora-logo-icon
00:01:18.255 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:18.255 ++ HOME_URL=https://fedoraproject.org/
00:01:18.255 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:18.255 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:18.255 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:18.255 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:18.255 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:18.255 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:18.255 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:18.255 ++ SUPPORT_END=2024-11-12
00:01:18.255 ++ VARIANT='Cloud Edition'
00:01:18.255 ++ VARIANT_ID=cloud
00:01:18.255 + uname -a
00:01:18.255 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:18.255 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:20.797 Hugepages
00:01:20.797 node hugesize free / total
00:01:20.797 node0 1048576kB 0 / 0
00:01:20.797 node0 2048kB 0 / 0
00:01:20.797 node1 1048576kB 0 / 0
00:01:20.797 node1 2048kB 0 / 0
00:01:20.797
00:01:20.797 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:20.797 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:20.797 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:20.797 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:20.797 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:20.797 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:20.797 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:20.797 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:20.797 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:20.797 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:20.797 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:20.797 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:20.797 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:20.797 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:20.797 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:20.797 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:20.797 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:20.797 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:20.797 + rm -f /tmp/spdk-ld-path
00:01:20.797 + source autorun-spdk.conf
00:01:20.797 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.797 ++ SPDK_TEST_NVMF=1
00:01:20.797 ++ SPDK_TEST_NVME_CLI=1
00:01:20.797 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:20.797 ++ SPDK_TEST_NVMF_NICS=e810
00:01:20.797 ++ SPDK_TEST_VFIOUSER=1
00:01:20.797 ++ SPDK_RUN_UBSAN=1
00:01:20.797 ++ NET_TYPE=phy
00:01:20.797 ++ RUN_NIGHTLY=0
00:01:20.797 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:20.797 + [[ -n '' ]]
00:01:20.797 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:20.797 + for M in /var/spdk/build-*-manifest.txt
00:01:20.797 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:20.797 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:20.797 + for M in /var/spdk/build-*-manifest.txt
00:01:20.797 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:20.797 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:20.797 + for M in /var/spdk/build-*-manifest.txt
00:01:20.797 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:20.797 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:20.797 ++ uname
00:01:20.797 + [[ Linux == \L\i\n\u\x ]]
00:01:20.797 + sudo dmesg -T
00:01:20.797 + sudo dmesg --clear
00:01:21.058 + dmesg_pid=942540
00:01:21.058 + [[ Fedora Linux == FreeBSD ]]
00:01:21.058 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:21.058 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:21.058 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:21.058 + [[ -x /usr/src/fio-static/fio ]]
00:01:21.058 + export FIO_BIN=/usr/src/fio-static/fio
00:01:21.058 + FIO_BIN=/usr/src/fio-static/fio
00:01:21.058 + sudo dmesg -Tw
00:01:21.058 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:21.058 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:21.058 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:21.058 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:21.058 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:21.058 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:21.058 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:21.058 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:21.058 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:21.058 Test configuration:
00:01:21.058 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.058 SPDK_TEST_NVMF=1
00:01:21.058 SPDK_TEST_NVME_CLI=1
00:01:21.058 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:21.058 SPDK_TEST_NVMF_NICS=e810
00:01:21.058 SPDK_TEST_VFIOUSER=1
00:01:21.058 SPDK_RUN_UBSAN=1
00:01:21.058 NET_TYPE=phy
00:01:21.058 RUN_NIGHTLY=0
12:41:41 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:21.058 12:41:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:21.058 12:41:41 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:21.058 12:41:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:21.058 12:41:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:21.058 12:41:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:21.058 12:41:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.058 12:41:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.058 12:41:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.058 12:41:41 -- paths/export.sh@5 -- $ export PATH
00:01:21.058 12:41:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.058 12:41:41 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:21.058 12:41:41 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:21.058 12:41:41 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728988901.XXXXXX
00:01:21.058 12:41:41 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728988901.phRx20
00:01:21.058 12:41:41 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:21.058 12:41:41 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:21.058 12:41:41 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:21.058 12:41:41 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:21.058 12:41:41 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:21.058 12:41:41 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:21.058 12:41:41 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:21.058 12:41:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:21.058 12:41:41 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:21.058 12:41:41 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:21.058 12:41:41 -- pm/common@17 -- $ local monitor
00:01:21.058 12:41:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:21.058 12:41:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:21.058 12:41:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:21.058 12:41:41 -- pm/common@21 -- $ date +%s
00:01:21.058 12:41:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:21.058 12:41:41 -- pm/common@21 -- $ date +%s
00:01:21.058 12:41:41 -- pm/common@25 -- $ sleep 1
00:01:21.058 12:41:41 -- pm/common@21 -- $ date +%s
00:01:21.058 12:41:41 -- pm/common@21 -- $ date +%s
00:01:21.058 12:41:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728988901
00:01:21.058 12:41:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728988901
00:01:21.058 12:41:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728988901
00:01:21.058 12:41:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728988901
00:01:21.058 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728988901_collect-vmstat.pm.log
00:01:21.058 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728988901_collect-cpu-load.pm.log
00:01:21.058 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728988901_collect-cpu-temp.pm.log
00:01:21.058 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728988901_collect-bmc-pm.bmc.pm.log
00:01:21.998 12:41:42 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:21.998 12:41:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:21.998 12:41:42 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:21.998 12:41:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:21.998 12:41:42 -- spdk/autobuild.sh@16 -- $ date -u
00:01:21.998 Tue Oct 15 10:41:42 AM UTC 2024
00:01:21.998 12:41:42 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:21.998 v25.01-pre-73-g96764f31c
00:01:21.998 12:41:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:21.998 12:41:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:21.998 12:41:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:21.998 12:41:42 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:21.998 12:41:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:21.998 12:41:42 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.259 ************************************
00:01:22.259 START TEST ubsan
00:01:22.259 ************************************
00:01:22.259 12:41:42 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:22.259 using ubsan
00:01:22.259
00:01:22.259 real 0m0.000s
00:01:22.259 user 0m0.000s
00:01:22.259 sys 0m0.000s
00:01:22.259 12:41:42 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:22.259 12:41:42 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:22.259 ************************************
00:01:22.259 END TEST ubsan
00:01:22.259 ************************************
00:01:22.259 12:41:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:22.259 12:41:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:22.259 12:41:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:22.259 12:41:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:22.259 12:41:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:22.259 12:41:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:22.259 12:41:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:22.259 12:41:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:22.259 12:41:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:22.259 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:22.259 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:22.519 Using 'verbs' RDMA provider
00:01:35.705 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:47.919 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:47.919 Creating mk/config.mk...done.
00:01:47.919 Creating mk/cc.flags.mk...done.
00:01:47.919 Type 'make' to build.
00:01:47.919 12:42:07 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:01:47.919 12:42:07 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:47.919 12:42:07 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:47.919 12:42:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:47.919 ************************************
00:01:47.919 START TEST make
00:01:47.919 ************************************
00:01:47.919 12:42:07 make -- common/autotest_common.sh@1125 -- $ make -j96
00:01:47.919 make[1]: Nothing to be done for 'all'.
00:01:49.310 The Meson build system
00:01:49.310 Version: 1.5.0
00:01:49.310 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:49.310 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:49.310 Build type: native build
00:01:49.310 Project name: libvfio-user
00:01:49.310 Project version: 0.0.1
00:01:49.310 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:49.310 C linker for the host machine: cc ld.bfd 2.40-14
00:01:49.310 Host machine cpu family: x86_64
00:01:49.310 Host machine cpu: x86_64
00:01:49.310 Run-time dependency threads found: YES
00:01:49.310 Library dl found: YES
00:01:49.310 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:49.310 Run-time dependency json-c found: YES 0.17
00:01:49.310 Run-time dependency cmocka found: YES 1.1.7
00:01:49.310 Program pytest-3 found: NO
00:01:49.310 Program flake8 found: NO
00:01:49.310 Program misspell-fixer found: NO
00:01:49.310 Program restructuredtext-lint found: NO
00:01:49.310 Program valgrind found: YES (/usr/bin/valgrind)
00:01:49.310 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:49.310 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:49.310 Compiler for C supports arguments -Wwrite-strings: YES
00:01:49.310 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:49.310 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:49.310 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:49.310 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:49.310 Build targets in project: 8
00:01:49.310 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:49.310 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:49.310
00:01:49.310 libvfio-user 0.0.1
00:01:49.310
00:01:49.310 User defined options
00:01:49.310 buildtype : debug
00:01:49.310 default_library: shared
00:01:49.310 libdir : /usr/local/lib
00:01:49.310
00:01:49.310 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:50.248 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:50.248 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:50.248 [2/37] Compiling C object samples/null.p/null.c.o
00:01:50.248 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:50.248 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:50.248 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:50.248 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:50.248 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:50.248 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:50.248 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:50.248 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:50.248 [11/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:50.248 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:50.248 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:50.248 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:50.248 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:50.248 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:50.248 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:50.248 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:50.248 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:50.248 [20/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:50.248 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:50.248 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:50.248 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:50.248 [24/37] Compiling C object samples/server.p/server.c.o
00:01:50.248 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:50.248 [26/37] Compiling C object samples/client.p/client.c.o
00:01:50.248 [27/37] Linking target samples/client
00:01:50.248 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:50.248 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:50.248 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:50.248 [31/37] Linking target test/unit_tests
00:01:50.507 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:50.507 [33/37] Linking target samples/shadow_ioeventfd_server
00:01:50.507 [34/37] Linking target samples/lspci
00:01:50.507 [35/37] Linking target samples/server
00:01:50.507 [36/37] Linking target samples/null
00:01:50.507 [37/37] Linking target samples/gpio-pci-idio-16
00:01:50.507 INFO: autodetecting backend as ninja
00:01:50.507 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:50.507 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:51.075 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:51.076 ninja: no work to do.
00:01:56.376 The Meson build system
00:01:56.376 Version: 1.5.0
00:01:56.376 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:56.376 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:56.376 Build type: native build
00:01:56.376 Program cat found: YES (/usr/bin/cat)
00:01:56.376 Project name: DPDK
00:01:56.376 Project version: 24.03.0
00:01:56.376 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:56.376 C linker for the host machine: cc ld.bfd 2.40-14
00:01:56.376 Host machine cpu family: x86_64
00:01:56.376 Host machine cpu: x86_64
00:01:56.376 Message: ## Building in Developer Mode ##
00:01:56.376 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:56.376 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:56.376 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:56.376 Program python3 found: YES (/usr/bin/python3)
00:01:56.376 Program cat found: YES (/usr/bin/cat)
00:01:56.376 Compiler for C supports arguments -march=native: YES
00:01:56.376 Checking for size of "void *" : 8
00:01:56.376 Checking for size of "void *" : 8 (cached)
00:01:56.376 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:56.376 Library m found: YES
00:01:56.376 Library numa found: YES
00:01:56.376 Has header "numaif.h" : YES
00:01:56.376 Library fdt found: NO
00:01:56.376 Library execinfo found: NO
00:01:56.376 Has header "execinfo.h" : YES
00:01:56.376 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:56.376 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:56.376 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:56.376 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:56.376 Run-time dependency openssl found: YES 3.1.1
00:01:56.376 Run-time dependency libpcap found: YES 1.10.4
00:01:56.376 Has header "pcap.h" with dependency libpcap: YES
00:01:56.376 Compiler for C supports arguments -Wcast-qual: YES
00:01:56.376 Compiler for C supports arguments -Wdeprecated: YES
00:01:56.376 Compiler for C supports arguments -Wformat: YES
00:01:56.376 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:56.376 Compiler for C supports arguments -Wformat-security: NO
00:01:56.376 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:56.376 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:56.376 Compiler for C supports arguments -Wnested-externs: YES
00:01:56.376 Compiler for C supports arguments -Wold-style-definition: YES
00:01:56.376 Compiler for C supports arguments -Wpointer-arith: YES
00:01:56.376 Compiler for C supports arguments -Wsign-compare: YES
00:01:56.376 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:56.376 Compiler for C supports arguments -Wundef: YES
00:01:56.376 Compiler for C supports arguments -Wwrite-strings: YES
00:01:56.376 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:56.376 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:56.376 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:56.376 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:56.376 Program objdump found: YES (/usr/bin/objdump)
00:01:56.376 Compiler for C supports arguments -mavx512f: YES
00:01:56.376 Checking if "AVX512 checking" compiles: YES
00:01:56.376 Fetching value of define "__SSE4_2__" : 1
00:01:56.376 Fetching value of define "__AES__" : 1
00:01:56.376 Fetching value of define "__AVX__" : 1
00:01:56.376 Fetching value of define "__AVX2__" : 1
00:01:56.376 Fetching value of define "__AVX512BW__" : 1
00:01:56.376 Fetching value of define "__AVX512CD__" : 1
00:01:56.376 Fetching value of define "__AVX512DQ__" : 1
00:01:56.376 Fetching value of define "__AVX512F__" : 1
00:01:56.376 Fetching value of define "__AVX512VL__" : 1
00:01:56.376 Fetching value of define "__PCLMUL__" : 1
00:01:56.376 Fetching value of define "__RDRND__" : 1
00:01:56.376 Fetching value of define "__RDSEED__" : 1
00:01:56.376 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:56.376 Fetching value of define "__znver1__" : (undefined)
00:01:56.376 Fetching value of define "__znver2__" : (undefined)
00:01:56.376 Fetching value of define "__znver3__" : (undefined)
00:01:56.376 Fetching value of define "__znver4__" : (undefined)
00:01:56.376 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:56.376 Message: lib/log: Defining dependency "log"
00:01:56.376 Message: lib/kvargs: Defining dependency "kvargs"
00:01:56.376 Message: lib/telemetry: Defining dependency "telemetry"
00:01:56.376 Checking for function "getentropy" : NO
00:01:56.376 Message: lib/eal: Defining dependency "eal"
00:01:56.376 Message: lib/ring: Defining dependency "ring"
00:01:56.376 Message: lib/rcu: Defining dependency "rcu"
00:01:56.376 Message: lib/mempool: Defining dependency "mempool"
00:01:56.376 Message: lib/mbuf: Defining dependency "mbuf"
00:01:56.376 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:56.376 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:56.376 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:56.376 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:56.376 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:56.376 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:56.376 Compiler for C supports arguments -mpclmul: YES
00:01:56.376 Compiler for C supports arguments -maes: YES
00:01:56.376 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:56.376 Compiler for C supports arguments -mavx512bw: YES
00:01:56.376 Compiler for C supports arguments -mavx512dq: YES
00:01:56.376 Compiler for C supports arguments -mavx512vl: YES
00:01:56.376 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:56.376 Compiler for C supports arguments -mavx2: YES
00:01:56.376 Compiler for C supports arguments -mavx: YES
00:01:56.376 Message: lib/net: Defining dependency "net"
00:01:56.376 Message: lib/meter: Defining dependency "meter"
00:01:56.376 Message: lib/ethdev: Defining dependency "ethdev"
00:01:56.376 Message: lib/pci: Defining dependency "pci"
00:01:56.376 Message: lib/cmdline: Defining dependency "cmdline"
00:01:56.376 Message: lib/hash: Defining dependency "hash"
00:01:56.376 Message: lib/timer: Defining dependency "timer"
00:01:56.376 Message: lib/compressdev: Defining dependency "compressdev"
00:01:56.376 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:56.376 Message: lib/dmadev: Defining dependency "dmadev"
00:01:56.376 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:56.376 Message: lib/power: Defining dependency "power"
00:01:56.376 Message: lib/reorder: Defining dependency "reorder"
00:01:56.376 Message: lib/security: Defining dependency "security"
00:01:56.376 Has header "linux/userfaultfd.h" : YES
00:01:56.376 Has header "linux/vduse.h" : YES
00:01:56.376 Message: lib/vhost: Defining dependency "vhost"
00:01:56.376 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:56.376 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:56.376 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:56.376 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:56.376 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:56.376 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:56.376 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:56.376 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:56.376 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:56.376 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:56.376 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:56.376 Configuring doxy-api-html.conf using configuration
00:01:56.376 Configuring doxy-api-man.conf using configuration
00:01:56.376 Program mandb found: YES (/usr/bin/mandb)
00:01:56.376 Program sphinx-build found: NO
00:01:56.376 Configuring rte_build_config.h using configuration
00:01:56.376 Message:
00:01:56.376 =================
00:01:56.376 Applications Enabled
00:01:56.376 =================
00:01:56.376
00:01:56.376 apps:
00:01:56.376
00:01:56.376
00:01:56.376 Message:
00:01:56.376 =================
00:01:56.376 Libraries Enabled
00:01:56.376 =================
00:01:56.376
00:01:56.376 libs:
00:01:56.376 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:56.376 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:56.376 cryptodev, dmadev, power, reorder, security, vhost,
00:01:56.376
00:01:56.376 Message:
00:01:56.376 ===============
00:01:56.376 Drivers Enabled
00:01:56.376 ===============
00:01:56.376
00:01:56.376 common:
00:01:56.376
00:01:56.376 bus:
00:01:56.376 pci, vdev,
00:01:56.376 mempool:
00:01:56.376 ring,
00:01:56.376 dma:
00:01:56.376
00:01:56.376 net:
00:01:56.376
00:01:56.376 crypto:
00:01:56.376
00:01:56.376 compress:
00:01:56.376
00:01:56.376 vdpa:
00:01:56.376
00:01:56.376
00:01:56.376 Message:
00:01:56.376 =================
00:01:56.376 Content Skipped
00:01:56.376 =================
00:01:56.376
00:01:56.376 apps:
00:01:56.376 dumpcap: explicitly disabled via build config
00:01:56.376 graph: explicitly disabled via build config
00:01:56.377 pdump: explicitly disabled via build config
00:01:56.377 proc-info: explicitly disabled via build config
00:01:56.377 test-acl: explicitly disabled via build config
00:01:56.377 test-bbdev: explicitly disabled via build config
00:01:56.377 test-cmdline: explicitly disabled via build config
00:01:56.377 test-compress-perf: explicitly disabled via build config
00:01:56.377 test-crypto-perf: explicitly disabled via build config
00:01:56.377 test-dma-perf: explicitly disabled via build config
00:01:56.377 test-eventdev: explicitly disabled via build config
00:01:56.377 test-fib: explicitly disabled via build config
00:01:56.377 test-flow-perf: explicitly disabled via build config
00:01:56.377 test-gpudev: explicitly disabled via build config
00:01:56.377 test-mldev: explicitly disabled via build config
00:01:56.377 test-pipeline: explicitly disabled via build config
00:01:56.377 test-pmd: explicitly disabled via build config
00:01:56.377 test-regex: explicitly disabled via build config
00:01:56.377 test-sad: explicitly disabled via build config
00:01:56.377 test-security-perf: explicitly disabled via build config
00:01:56.377
00:01:56.377 libs:
00:01:56.377 argparse: explicitly disabled via build config
00:01:56.377 metrics: explicitly disabled via build config
00:01:56.377 acl: explicitly disabled via build config
00:01:56.377 bbdev: explicitly disabled via build config
00:01:56.377 bitratestats: explicitly disabled via build config
00:01:56.377 bpf: explicitly disabled via build config
00:01:56.377 cfgfile: explicitly disabled via build config
00:01:56.377 distributor: explicitly disabled via build config
00:01:56.377 efd: explicitly disabled via build config
00:01:56.377 eventdev: explicitly disabled via build config
00:01:56.377 dispatcher: explicitly disabled via build config
00:01:56.377 gpudev: explicitly disabled via build config
00:01:56.377 gro: explicitly disabled via build config
00:01:56.377 gso: explicitly disabled via build config
00:01:56.377 ip_frag: explicitly disabled via build config 00:01:56.377 jobstats: explicitly disabled via build config 00:01:56.377 latencystats: explicitly disabled via build config 00:01:56.377 lpm: explicitly disabled via build config 00:01:56.377 member: explicitly disabled via build config 00:01:56.377 pcapng: explicitly disabled via build config 00:01:56.377 rawdev: explicitly disabled via build config 00:01:56.377 regexdev: explicitly disabled via build config 00:01:56.377 mldev: explicitly disabled via build config 00:01:56.377 rib: explicitly disabled via build config 00:01:56.377 sched: explicitly disabled via build config 00:01:56.377 stack: explicitly disabled via build config 00:01:56.377 ipsec: explicitly disabled via build config 00:01:56.377 pdcp: explicitly disabled via build config 00:01:56.377 fib: explicitly disabled via build config 00:01:56.377 port: explicitly disabled via build config 00:01:56.377 pdump: explicitly disabled via build config 00:01:56.377 table: explicitly disabled via build config 00:01:56.377 pipeline: explicitly disabled via build config 00:01:56.377 graph: explicitly disabled via build config 00:01:56.377 node: explicitly disabled via build config 00:01:56.377 00:01:56.377 drivers: 00:01:56.377 common/cpt: not in enabled drivers build config 00:01:56.377 common/dpaax: not in enabled drivers build config 00:01:56.377 common/iavf: not in enabled drivers build config 00:01:56.377 common/idpf: not in enabled drivers build config 00:01:56.377 common/ionic: not in enabled drivers build config 00:01:56.377 common/mvep: not in enabled drivers build config 00:01:56.377 common/octeontx: not in enabled drivers build config 00:01:56.377 bus/auxiliary: not in enabled drivers build config 00:01:56.377 bus/cdx: not in enabled drivers build config 00:01:56.377 bus/dpaa: not in enabled drivers build config 00:01:56.377 bus/fslmc: not in enabled drivers build config 00:01:56.377 bus/ifpga: not in enabled drivers build config 00:01:56.377 
bus/platform: not in enabled drivers build config 00:01:56.377 bus/uacce: not in enabled drivers build config 00:01:56.377 bus/vmbus: not in enabled drivers build config 00:01:56.377 common/cnxk: not in enabled drivers build config 00:01:56.377 common/mlx5: not in enabled drivers build config 00:01:56.377 common/nfp: not in enabled drivers build config 00:01:56.377 common/nitrox: not in enabled drivers build config 00:01:56.377 common/qat: not in enabled drivers build config 00:01:56.377 common/sfc_efx: not in enabled drivers build config 00:01:56.377 mempool/bucket: not in enabled drivers build config 00:01:56.377 mempool/cnxk: not in enabled drivers build config 00:01:56.377 mempool/dpaa: not in enabled drivers build config 00:01:56.377 mempool/dpaa2: not in enabled drivers build config 00:01:56.377 mempool/octeontx: not in enabled drivers build config 00:01:56.377 mempool/stack: not in enabled drivers build config 00:01:56.377 dma/cnxk: not in enabled drivers build config 00:01:56.377 dma/dpaa: not in enabled drivers build config 00:01:56.377 dma/dpaa2: not in enabled drivers build config 00:01:56.377 dma/hisilicon: not in enabled drivers build config 00:01:56.377 dma/idxd: not in enabled drivers build config 00:01:56.377 dma/ioat: not in enabled drivers build config 00:01:56.377 dma/skeleton: not in enabled drivers build config 00:01:56.377 net/af_packet: not in enabled drivers build config 00:01:56.377 net/af_xdp: not in enabled drivers build config 00:01:56.377 net/ark: not in enabled drivers build config 00:01:56.377 net/atlantic: not in enabled drivers build config 00:01:56.377 net/avp: not in enabled drivers build config 00:01:56.377 net/axgbe: not in enabled drivers build config 00:01:56.377 net/bnx2x: not in enabled drivers build config 00:01:56.377 net/bnxt: not in enabled drivers build config 00:01:56.377 net/bonding: not in enabled drivers build config 00:01:56.377 net/cnxk: not in enabled drivers build config 00:01:56.377 net/cpfl: not in enabled 
drivers build config 00:01:56.377 net/cxgbe: not in enabled drivers build config 00:01:56.377 net/dpaa: not in enabled drivers build config 00:01:56.377 net/dpaa2: not in enabled drivers build config 00:01:56.377 net/e1000: not in enabled drivers build config 00:01:56.377 net/ena: not in enabled drivers build config 00:01:56.377 net/enetc: not in enabled drivers build config 00:01:56.377 net/enetfec: not in enabled drivers build config 00:01:56.377 net/enic: not in enabled drivers build config 00:01:56.377 net/failsafe: not in enabled drivers build config 00:01:56.377 net/fm10k: not in enabled drivers build config 00:01:56.377 net/gve: not in enabled drivers build config 00:01:56.377 net/hinic: not in enabled drivers build config 00:01:56.377 net/hns3: not in enabled drivers build config 00:01:56.377 net/i40e: not in enabled drivers build config 00:01:56.377 net/iavf: not in enabled drivers build config 00:01:56.377 net/ice: not in enabled drivers build config 00:01:56.377 net/idpf: not in enabled drivers build config 00:01:56.377 net/igc: not in enabled drivers build config 00:01:56.377 net/ionic: not in enabled drivers build config 00:01:56.377 net/ipn3ke: not in enabled drivers build config 00:01:56.377 net/ixgbe: not in enabled drivers build config 00:01:56.377 net/mana: not in enabled drivers build config 00:01:56.377 net/memif: not in enabled drivers build config 00:01:56.377 net/mlx4: not in enabled drivers build config 00:01:56.377 net/mlx5: not in enabled drivers build config 00:01:56.377 net/mvneta: not in enabled drivers build config 00:01:56.377 net/mvpp2: not in enabled drivers build config 00:01:56.377 net/netvsc: not in enabled drivers build config 00:01:56.377 net/nfb: not in enabled drivers build config 00:01:56.377 net/nfp: not in enabled drivers build config 00:01:56.377 net/ngbe: not in enabled drivers build config 00:01:56.377 net/null: not in enabled drivers build config 00:01:56.377 net/octeontx: not in enabled drivers build config 
00:01:56.377 net/octeon_ep: not in enabled drivers build config 00:01:56.377 net/pcap: not in enabled drivers build config 00:01:56.377 net/pfe: not in enabled drivers build config 00:01:56.377 net/qede: not in enabled drivers build config 00:01:56.377 net/ring: not in enabled drivers build config 00:01:56.377 net/sfc: not in enabled drivers build config 00:01:56.377 net/softnic: not in enabled drivers build config 00:01:56.377 net/tap: not in enabled drivers build config 00:01:56.377 net/thunderx: not in enabled drivers build config 00:01:56.377 net/txgbe: not in enabled drivers build config 00:01:56.377 net/vdev_netvsc: not in enabled drivers build config 00:01:56.377 net/vhost: not in enabled drivers build config 00:01:56.377 net/virtio: not in enabled drivers build config 00:01:56.377 net/vmxnet3: not in enabled drivers build config 00:01:56.377 raw/*: missing internal dependency, "rawdev" 00:01:56.377 crypto/armv8: not in enabled drivers build config 00:01:56.377 crypto/bcmfs: not in enabled drivers build config 00:01:56.377 crypto/caam_jr: not in enabled drivers build config 00:01:56.377 crypto/ccp: not in enabled drivers build config 00:01:56.377 crypto/cnxk: not in enabled drivers build config 00:01:56.377 crypto/dpaa_sec: not in enabled drivers build config 00:01:56.377 crypto/dpaa2_sec: not in enabled drivers build config 00:01:56.377 crypto/ipsec_mb: not in enabled drivers build config 00:01:56.377 crypto/mlx5: not in enabled drivers build config 00:01:56.377 crypto/mvsam: not in enabled drivers build config 00:01:56.377 crypto/nitrox: not in enabled drivers build config 00:01:56.377 crypto/null: not in enabled drivers build config 00:01:56.377 crypto/octeontx: not in enabled drivers build config 00:01:56.377 crypto/openssl: not in enabled drivers build config 00:01:56.377 crypto/scheduler: not in enabled drivers build config 00:01:56.377 crypto/uadk: not in enabled drivers build config 00:01:56.377 crypto/virtio: not in enabled drivers build config 
00:01:56.377 compress/isal: not in enabled drivers build config
00:01:56.377 compress/mlx5: not in enabled drivers build config
00:01:56.377 compress/nitrox: not in enabled drivers build config
00:01:56.377 compress/octeontx: not in enabled drivers build config
00:01:56.377 compress/zlib: not in enabled drivers build config
00:01:56.377 regex/*: missing internal dependency, "regexdev"
00:01:56.377 ml/*: missing internal dependency, "mldev"
00:01:56.377 vdpa/ifc: not in enabled drivers build config
00:01:56.377 vdpa/mlx5: not in enabled drivers build config
00:01:56.377 vdpa/nfp: not in enabled drivers build config
00:01:56.377 vdpa/sfc: not in enabled drivers build config
00:01:56.377 event/*: missing internal dependency, "eventdev"
00:01:56.377 baseband/*: missing internal dependency, "bbdev"
00:01:56.377 gpu/*: missing internal dependency, "gpudev"
00:01:56.377
00:01:56.377
00:01:56.377 Build targets in project: 85
00:01:56.377
00:01:56.377 DPDK 24.03.0
00:01:56.377
00:01:56.377 User defined options
00:01:56.377 buildtype : debug
00:01:56.377 default_library : shared
00:01:56.377 libdir : lib
00:01:56.377 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:56.378 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:56.378 c_link_args :
00:01:56.378 cpu_instruction_set: native
00:01:56.378 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:56.378 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:56.378 enable_docs : false
00:01:56.378 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:56.378 enable_kmods : false
00:01:56.378 max_lcores : 128
00:01:56.378 tests : false
00:01:56.378
00:01:56.378 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:56.646 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:56.646 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:56.646 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:56.646 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:56.646 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:56.908 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:56.908 [6/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:56.908 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:56.908 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:56.908 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:56.908 [10/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:56.908 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:56.908 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:56.908 [13/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:56.908 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:56.908 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:56.908 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:56.908 [17/268] Linking static target lib/librte_kvargs.a
00:01:56.908 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:56.908 [19/268] Linking static target lib/librte_log.a
00:01:56.908 [20/268] Compiling C object
lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:56.908 [21/268] Linking static target lib/librte_pci.a 00:01:56.908 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:56.908 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.908 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:57.174 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:57.174 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:57.174 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:57.174 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.174 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:57.174 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:57.174 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:57.174 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:57.174 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:57.174 [34/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:57.174 [35/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:57.174 [36/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:57.174 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:57.174 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:57.174 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:57.174 [40/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:57.174 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:57.174 [42/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:57.174 [43/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:57.174 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:57.174 [45/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:57.174 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:57.174 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:57.174 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:57.174 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:57.174 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:57.174 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:57.174 [52/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:57.174 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:57.174 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:57.434 [55/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:57.434 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:57.434 [57/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:57.434 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:57.434 [59/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:57.434 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:57.434 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:57.434 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:57.434 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:57.434 [64/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:57.434 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:57.434 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:57.434 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:57.434 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:57.434 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:57.434 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:57.434 [71/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:57.434 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:57.434 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:57.434 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:57.434 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:57.434 [76/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.434 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:57.434 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:57.434 [79/268] Linking static target lib/librte_meter.a 00:01:57.434 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:57.434 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:57.434 [82/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:57.434 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:57.434 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:57.434 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:57.434 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 
00:01:57.434 [87/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:57.434 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:57.434 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:57.434 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:57.434 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:57.434 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:57.434 [93/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:57.434 [94/268] Linking static target lib/librte_ring.a 00:01:57.434 [95/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:57.434 [96/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:57.434 [97/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.434 [98/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:57.434 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:57.434 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:57.434 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:57.434 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:57.434 [103/268] Linking static target lib/librte_telemetry.a 00:01:57.434 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:57.434 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:57.434 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:57.434 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:57.434 [108/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:57.434 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:57.434 [110/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:57.434 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:57.434 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:57.434 [113/268] Linking static target lib/librte_rcu.a 00:01:57.434 [114/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.434 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:57.434 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:57.434 [117/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:57.434 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:57.434 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:57.434 [120/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:57.434 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:57.434 [122/268] Linking static target lib/librte_eal.a 00:01:57.434 [123/268] Linking static target lib/librte_mempool.a 00:01:57.434 [124/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:57.434 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:57.435 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:57.435 [127/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:57.435 [128/268] Linking static target lib/librte_net.a 00:01:57.694 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:57.694 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:57.694 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:57.694 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:57.694 [133/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:57.694 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.694 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:57.694 [136/268] Linking static target lib/librte_cmdline.a 00:01:57.694 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.694 [138/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:57.694 [139/268] Linking static target lib/librte_mbuf.a 00:01:57.694 [140/268] Linking target lib/librte_log.so.24.1 00:01:57.694 [141/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:57.694 [142/268] Linking static target lib/librte_timer.a 00:01:57.694 [143/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:57.694 [144/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.694 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:57.694 [146/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.694 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:57.694 [148/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:57.694 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:57.694 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:57.694 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:57.694 [152/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.694 [153/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:57.694 [154/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:57.694 [155/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:57.694 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:57.953 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:57.953 [158/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:57.953 [159/268] Linking target lib/librte_kvargs.so.24.1 00:01:57.953 [160/268] Linking static target lib/librte_reorder.a 00:01:57.953 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:57.953 [162/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.953 [163/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:57.953 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.953 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:57.953 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:57.953 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:57.953 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:57.953 [169/268] Linking static target lib/librte_dmadev.a 00:01:57.953 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:57.953 [171/268] Linking target lib/librte_telemetry.so.24.1 00:01:57.953 [172/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:57.953 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:57.953 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:57.953 [175/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.953 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:57.953 [177/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:57.953 [178/268] Linking static target lib/librte_compressdev.a 00:01:57.953 [179/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:57.953 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:57.953 [181/268] Linking static target lib/librte_power.a 00:01:57.953 [182/268] Linking static target lib/librte_security.a 00:01:57.953 [183/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:57.953 [184/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:57.953 [185/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:57.953 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:57.953 [187/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:57.953 [188/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:57.953 [189/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:57.953 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:57.953 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:57.953 [192/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:57.953 [193/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:58.212 [194/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.212 [195/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:58.212 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:58.212 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:58.212 [198/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:58.212 [199/268] Linking static target lib/librte_hash.a 00:01:58.212 [200/268] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.212 [201/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.212 [202/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:58.212 [203/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:58.212 [204/268] Linking static target drivers/librte_bus_pci.a 00:01:58.212 [205/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.212 [206/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.212 [207/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.212 [208/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.212 [209/268] Linking static target drivers/librte_mempool_ring.a 00:01:58.212 [210/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.212 [211/268] Linking static target drivers/librte_bus_vdev.a 00:01:58.212 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:58.212 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.212 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.212 [215/268] Linking static target lib/librte_cryptodev.a 00:01:58.471 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.471 [217/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.471 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.471 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.730 [220/268] Generating 
lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.730 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:58.730 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:58.730 [223/268] Linking static target lib/librte_ethdev.a 00:01:58.730 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.730 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.989 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.989 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.557 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:59.816 [229/268] Linking static target lib/librte_vhost.a 00:02:00.076 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.454 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.730 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.300 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.558 [234/268] Linking target lib/librte_eal.so.24.1 00:02:07.558 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:07.558 [236/268] Linking target lib/librte_meter.so.24.1 00:02:07.558 [237/268] Linking target lib/librte_ring.so.24.1 00:02:07.558 [238/268] Linking target lib/librte_pci.so.24.1 00:02:07.558 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:07.558 [240/268] Linking target lib/librte_timer.so.24.1 00:02:07.558 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:07.817 [242/268] Generating symbol file 
lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:07.817 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:07.817 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:07.817 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:07.817 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:07.817 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:07.817 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:07.817 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:07.817 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:07.817 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:08.077 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:08.077 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:08.077 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:08.077 [255/268] Linking target lib/librte_net.so.24.1 00:02:08.077 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:08.077 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:08.077 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:08.336 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:08.336 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:08.336 [261/268] Linking target lib/librte_security.so.24.1 00:02:08.336 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:08.336 [263/268] Linking target lib/librte_hash.so.24.1 00:02:08.336 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:08.595 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:08.595 [266/268] Generating symbol file 
lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:08.595 [267/268] Linking target lib/librte_power.so.24.1 00:02:08.595 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:08.595 INFO: autodetecting backend as ninja 00:02:08.595 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:20.822 CC lib/ut_mock/mock.o 00:02:20.822 CC lib/log/log.o 00:02:20.822 CC lib/log/log_flags.o 00:02:20.822 CC lib/log/log_deprecated.o 00:02:20.822 CC lib/ut/ut.o 00:02:20.822 LIB libspdk_log.a 00:02:20.822 LIB libspdk_ut.a 00:02:20.822 LIB libspdk_ut_mock.a 00:02:20.822 SO libspdk_ut.so.2.0 00:02:20.822 SO libspdk_log.so.7.1 00:02:20.822 SO libspdk_ut_mock.so.6.0 00:02:20.822 SYMLINK libspdk_ut.so 00:02:20.822 SYMLINK libspdk_log.so 00:02:20.822 SYMLINK libspdk_ut_mock.so 00:02:20.822 CC lib/ioat/ioat.o 00:02:20.822 CC lib/util/base64.o 00:02:20.822 CC lib/util/bit_array.o 00:02:20.822 CC lib/util/crc16.o 00:02:20.822 CC lib/util/cpuset.o 00:02:20.822 CXX lib/trace_parser/trace.o 00:02:20.822 CC lib/dma/dma.o 00:02:20.822 CC lib/util/crc32.o 00:02:20.822 CC lib/util/crc32c.o 00:02:20.822 CC lib/util/crc32_ieee.o 00:02:20.822 CC lib/util/crc64.o 00:02:20.822 CC lib/util/dif.o 00:02:20.822 CC lib/util/fd.o 00:02:20.822 CC lib/util/fd_group.o 00:02:20.822 CC lib/util/file.o 00:02:20.822 CC lib/util/hexlify.o 00:02:20.822 CC lib/util/iov.o 00:02:20.822 CC lib/util/math.o 00:02:20.822 CC lib/util/net.o 00:02:20.822 CC lib/util/pipe.o 00:02:20.822 CC lib/util/strerror_tls.o 00:02:20.822 CC lib/util/string.o 00:02:20.822 CC lib/util/uuid.o 00:02:20.822 CC lib/util/xor.o 00:02:20.822 CC lib/util/zipf.o 00:02:20.822 CC lib/util/md5.o 00:02:20.822 CC lib/vfio_user/host/vfio_user_pci.o 00:02:20.822 CC lib/vfio_user/host/vfio_user.o 00:02:20.822 LIB libspdk_dma.a 00:02:20.822 SO libspdk_dma.so.5.0 00:02:20.822 LIB libspdk_ioat.a 00:02:20.822 SYMLINK libspdk_dma.so 00:02:20.822 SO 
libspdk_ioat.so.7.0 00:02:20.822 SYMLINK libspdk_ioat.so 00:02:20.822 LIB libspdk_vfio_user.a 00:02:20.822 SO libspdk_vfio_user.so.5.0 00:02:20.822 LIB libspdk_util.a 00:02:20.822 SYMLINK libspdk_vfio_user.so 00:02:20.822 SO libspdk_util.so.10.0 00:02:20.822 SYMLINK libspdk_util.so 00:02:20.822 LIB libspdk_trace_parser.a 00:02:20.822 SO libspdk_trace_parser.so.6.0 00:02:20.822 SYMLINK libspdk_trace_parser.so 00:02:20.822 CC lib/vmd/vmd.o 00:02:20.822 CC lib/vmd/led.o 00:02:20.822 CC lib/rdma_provider/common.o 00:02:20.822 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:20.822 CC lib/json/json_parse.o 00:02:20.822 CC lib/env_dpdk/env.o 00:02:20.822 CC lib/env_dpdk/memory.o 00:02:20.822 CC lib/json/json_util.o 00:02:20.822 CC lib/env_dpdk/pci.o 00:02:20.822 CC lib/json/json_write.o 00:02:20.822 CC lib/rdma_utils/rdma_utils.o 00:02:20.822 CC lib/env_dpdk/init.o 00:02:20.822 CC lib/idxd/idxd.o 00:02:20.822 CC lib/env_dpdk/threads.o 00:02:20.822 CC lib/idxd/idxd_user.o 00:02:20.822 CC lib/env_dpdk/pci_ioat.o 00:02:20.822 CC lib/idxd/idxd_kernel.o 00:02:20.822 CC lib/env_dpdk/pci_virtio.o 00:02:20.822 CC lib/conf/conf.o 00:02:20.822 CC lib/env_dpdk/pci_vmd.o 00:02:20.822 CC lib/env_dpdk/pci_idxd.o 00:02:20.822 CC lib/env_dpdk/pci_event.o 00:02:20.822 CC lib/env_dpdk/sigbus_handler.o 00:02:20.822 CC lib/env_dpdk/pci_dpdk.o 00:02:20.822 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:20.822 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:20.822 LIB libspdk_rdma_provider.a 00:02:20.822 SO libspdk_rdma_provider.so.6.0 00:02:20.822 LIB libspdk_conf.a 00:02:20.822 LIB libspdk_rdma_utils.a 00:02:20.822 LIB libspdk_json.a 00:02:20.822 SO libspdk_conf.so.6.0 00:02:20.822 SYMLINK libspdk_rdma_provider.so 00:02:20.822 SO libspdk_rdma_utils.so.1.0 00:02:20.822 SO libspdk_json.so.6.0 00:02:20.822 SYMLINK libspdk_conf.so 00:02:20.822 SYMLINK libspdk_rdma_utils.so 00:02:20.822 SYMLINK libspdk_json.so 00:02:20.822 LIB libspdk_idxd.a 00:02:20.822 LIB libspdk_vmd.a 00:02:20.822 SO libspdk_idxd.so.12.1 
00:02:21.082 SO libspdk_vmd.so.6.0 00:02:21.082 SYMLINK libspdk_idxd.so 00:02:21.082 SYMLINK libspdk_vmd.so 00:02:21.082 CC lib/jsonrpc/jsonrpc_server.o 00:02:21.082 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:21.082 CC lib/jsonrpc/jsonrpc_client.o 00:02:21.082 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:21.342 LIB libspdk_jsonrpc.a 00:02:21.342 SO libspdk_jsonrpc.so.6.0 00:02:21.342 SYMLINK libspdk_jsonrpc.so 00:02:21.602 LIB libspdk_env_dpdk.a 00:02:21.602 SO libspdk_env_dpdk.so.15.1 00:02:21.602 SYMLINK libspdk_env_dpdk.so 00:02:21.602 CC lib/rpc/rpc.o 00:02:21.862 LIB libspdk_rpc.a 00:02:21.862 SO libspdk_rpc.so.6.0 00:02:21.862 SYMLINK libspdk_rpc.so 00:02:22.430 CC lib/trace/trace.o 00:02:22.430 CC lib/trace/trace_flags.o 00:02:22.430 CC lib/trace/trace_rpc.o 00:02:22.430 CC lib/keyring/keyring.o 00:02:22.430 CC lib/keyring/keyring_rpc.o 00:02:22.430 CC lib/notify/notify.o 00:02:22.430 CC lib/notify/notify_rpc.o 00:02:22.430 LIB libspdk_notify.a 00:02:22.430 SO libspdk_notify.so.6.0 00:02:22.430 LIB libspdk_keyring.a 00:02:22.430 LIB libspdk_trace.a 00:02:22.430 SO libspdk_keyring.so.2.0 00:02:22.689 SYMLINK libspdk_notify.so 00:02:22.689 SO libspdk_trace.so.11.0 00:02:22.689 SYMLINK libspdk_keyring.so 00:02:22.689 SYMLINK libspdk_trace.so 00:02:22.949 CC lib/sock/sock.o 00:02:22.949 CC lib/sock/sock_rpc.o 00:02:22.949 CC lib/thread/thread.o 00:02:22.949 CC lib/thread/iobuf.o 00:02:23.209 LIB libspdk_sock.a 00:02:23.209 SO libspdk_sock.so.10.0 00:02:23.209 SYMLINK libspdk_sock.so 00:02:23.778 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:23.778 CC lib/nvme/nvme_ctrlr.o 00:02:23.778 CC lib/nvme/nvme_fabric.o 00:02:23.778 CC lib/nvme/nvme_ns_cmd.o 00:02:23.778 CC lib/nvme/nvme_ns.o 00:02:23.778 CC lib/nvme/nvme_pcie_common.o 00:02:23.778 CC lib/nvme/nvme_pcie.o 00:02:23.778 CC lib/nvme/nvme_qpair.o 00:02:23.778 CC lib/nvme/nvme.o 00:02:23.778 CC lib/nvme/nvme_quirks.o 00:02:23.778 CC lib/nvme/nvme_transport.o 00:02:23.778 CC lib/nvme/nvme_discovery.o 00:02:23.778 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:23.778 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:23.778 CC lib/nvme/nvme_tcp.o 00:02:23.778 CC lib/nvme/nvme_opal.o 00:02:23.778 CC lib/nvme/nvme_io_msg.o 00:02:23.778 CC lib/nvme/nvme_poll_group.o 00:02:23.778 CC lib/nvme/nvme_zns.o 00:02:23.778 CC lib/nvme/nvme_stubs.o 00:02:23.778 CC lib/nvme/nvme_auth.o 00:02:23.778 CC lib/nvme/nvme_cuse.o 00:02:23.778 CC lib/nvme/nvme_vfio_user.o 00:02:23.778 CC lib/nvme/nvme_rdma.o 00:02:24.038 LIB libspdk_thread.a 00:02:24.039 SO libspdk_thread.so.10.2 00:02:24.039 SYMLINK libspdk_thread.so 00:02:24.298 CC lib/init/subsystem.o 00:02:24.298 CC lib/init/json_config.o 00:02:24.298 CC lib/init/subsystem_rpc.o 00:02:24.298 CC lib/init/rpc.o 00:02:24.298 CC lib/virtio/virtio.o 00:02:24.298 CC lib/virtio/virtio_vhost_user.o 00:02:24.298 CC lib/virtio/virtio_vfio_user.o 00:02:24.298 CC lib/virtio/virtio_pci.o 00:02:24.298 CC lib/vfu_tgt/tgt_endpoint.o 00:02:24.298 CC lib/vfu_tgt/tgt_rpc.o 00:02:24.298 CC lib/blob/blobstore.o 00:02:24.298 CC lib/blob/request.o 00:02:24.298 CC lib/blob/zeroes.o 00:02:24.298 CC lib/blob/blob_bs_dev.o 00:02:24.556 CC lib/fsdev/fsdev.o 00:02:24.556 CC lib/fsdev/fsdev_io.o 00:02:24.556 CC lib/accel/accel.o 00:02:24.556 CC lib/fsdev/fsdev_rpc.o 00:02:24.556 CC lib/accel/accel_rpc.o 00:02:24.556 CC lib/accel/accel_sw.o 00:02:24.556 LIB libspdk_init.a 00:02:24.556 SO libspdk_init.so.6.0 00:02:24.814 LIB libspdk_virtio.a 00:02:24.814 LIB libspdk_vfu_tgt.a 00:02:24.814 SYMLINK libspdk_init.so 00:02:24.814 SO libspdk_virtio.so.7.0 00:02:24.814 SO libspdk_vfu_tgt.so.3.0 00:02:24.814 SYMLINK libspdk_virtio.so 00:02:24.815 SYMLINK libspdk_vfu_tgt.so 00:02:24.815 LIB libspdk_fsdev.a 00:02:25.073 SO libspdk_fsdev.so.1.0 00:02:25.073 CC lib/event/app.o 00:02:25.073 CC lib/event/reactor.o 00:02:25.073 CC lib/event/log_rpc.o 00:02:25.073 CC lib/event/app_rpc.o 00:02:25.073 CC lib/event/scheduler_static.o 00:02:25.073 SYMLINK libspdk_fsdev.so 00:02:25.331 LIB libspdk_accel.a 
00:02:25.331 SO libspdk_accel.so.16.0 00:02:25.332 LIB libspdk_nvme.a 00:02:25.332 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:25.332 SYMLINK libspdk_accel.so 00:02:25.332 LIB libspdk_event.a 00:02:25.332 SO libspdk_nvme.so.15.0 00:02:25.332 SO libspdk_event.so.14.0 00:02:25.590 SYMLINK libspdk_event.so 00:02:25.590 SYMLINK libspdk_nvme.so 00:02:25.590 CC lib/bdev/bdev.o 00:02:25.590 CC lib/bdev/bdev_rpc.o 00:02:25.590 CC lib/bdev/bdev_zone.o 00:02:25.590 CC lib/bdev/part.o 00:02:25.590 CC lib/bdev/scsi_nvme.o 00:02:25.850 LIB libspdk_fuse_dispatcher.a 00:02:25.850 SO libspdk_fuse_dispatcher.so.1.0 00:02:25.850 SYMLINK libspdk_fuse_dispatcher.so 00:02:26.787 LIB libspdk_blob.a 00:02:26.787 SO libspdk_blob.so.11.0 00:02:26.787 SYMLINK libspdk_blob.so 00:02:27.045 CC lib/lvol/lvol.o 00:02:27.045 CC lib/blobfs/blobfs.o 00:02:27.045 CC lib/blobfs/tree.o 00:02:27.614 LIB libspdk_bdev.a 00:02:27.614 SO libspdk_bdev.so.17.0 00:02:27.614 SYMLINK libspdk_bdev.so 00:02:27.614 LIB libspdk_blobfs.a 00:02:27.614 SO libspdk_blobfs.so.10.0 00:02:27.614 LIB libspdk_lvol.a 00:02:27.614 SO libspdk_lvol.so.10.0 00:02:27.614 SYMLINK libspdk_blobfs.so 00:02:27.872 SYMLINK libspdk_lvol.so 00:02:27.872 CC lib/ftl/ftl_core.o 00:02:27.872 CC lib/ftl/ftl_init.o 00:02:27.872 CC lib/ftl/ftl_layout.o 00:02:27.872 CC lib/ftl/ftl_debug.o 00:02:27.872 CC lib/ftl/ftl_io.o 00:02:27.872 CC lib/nvmf/ctrlr.o 00:02:27.872 CC lib/ftl/ftl_sb.o 00:02:27.872 CC lib/ftl/ftl_l2p.o 00:02:27.872 CC lib/ftl/ftl_l2p_flat.o 00:02:27.872 CC lib/nvmf/ctrlr_discovery.o 00:02:27.872 CC lib/ftl/ftl_nv_cache.o 00:02:27.872 CC lib/ftl/ftl_band.o 00:02:27.872 CC lib/nvmf/ctrlr_bdev.o 00:02:27.872 CC lib/ftl/ftl_band_ops.o 00:02:27.872 CC lib/nvmf/subsystem.o 00:02:27.872 CC lib/ftl/ftl_writer.o 00:02:27.872 CC lib/ftl/ftl_rq.o 00:02:27.872 CC lib/nvmf/nvmf_rpc.o 00:02:27.872 CC lib/nvmf/nvmf.o 00:02:27.872 CC lib/ftl/ftl_reloc.o 00:02:27.872 CC lib/ftl/ftl_p2l.o 00:02:27.872 CC lib/ftl/ftl_l2p_cache.o 00:02:27.872 CC 
lib/nvmf/tcp.o 00:02:27.872 CC lib/nvmf/transport.o 00:02:27.872 CC lib/ftl/ftl_p2l_log.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt.o 00:02:27.872 CC lib/nvmf/stubs.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:27.872 CC lib/nvmf/mdns_server.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:27.872 CC lib/ublk/ublk.o 00:02:27.872 CC lib/nvmf/vfio_user.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:27.872 CC lib/nvmf/rdma.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:27.872 CC lib/scsi/dev.o 00:02:27.872 CC lib/ublk/ublk_rpc.o 00:02:27.872 CC lib/scsi/lun.o 00:02:27.872 CC lib/nvmf/auth.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:27.872 CC lib/nbd/nbd.o 00:02:27.872 CC lib/scsi/port.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:27.872 CC lib/scsi/scsi_bdev.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:27.872 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:27.872 CC lib/scsi/scsi_pr.o 00:02:27.872 CC lib/nbd/nbd_rpc.o 00:02:27.872 CC lib/scsi/scsi.o 00:02:27.872 CC lib/ftl/utils/ftl_conf.o 00:02:27.872 CC lib/ftl/utils/ftl_md.o 00:02:27.872 CC lib/scsi/scsi_rpc.o 00:02:27.872 CC lib/scsi/task.o 00:02:27.872 CC lib/ftl/utils/ftl_mempool.o 00:02:27.872 CC lib/ftl/utils/ftl_bitmap.o 00:02:27.872 CC lib/ftl/utils/ftl_property.o 00:02:27.872 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:27.872 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:27.872 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:27.872 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:27.872 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:27.872 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:27.872 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:27.872 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:27.872 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:27.872 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:27.872 CC 
lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:27.872 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:27.872 CC lib/ftl/base/ftl_base_dev.o 00:02:27.872 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:27.872 CC lib/ftl/base/ftl_base_bdev.o 00:02:27.872 CC lib/ftl/ftl_trace.o 00:02:28.438 LIB libspdk_nbd.a 00:02:28.438 LIB libspdk_scsi.a 00:02:28.438 SO libspdk_nbd.so.7.0 00:02:28.697 SO libspdk_scsi.so.9.0 00:02:28.697 LIB libspdk_ublk.a 00:02:28.697 SYMLINK libspdk_nbd.so 00:02:28.697 SO libspdk_ublk.so.3.0 00:02:28.697 SYMLINK libspdk_scsi.so 00:02:28.697 SYMLINK libspdk_ublk.so 00:02:28.956 LIB libspdk_ftl.a 00:02:28.956 CC lib/iscsi/iscsi.o 00:02:28.956 CC lib/iscsi/conn.o 00:02:28.956 CC lib/iscsi/init_grp.o 00:02:28.956 CC lib/iscsi/param.o 00:02:28.956 CC lib/iscsi/portal_grp.o 00:02:28.956 CC lib/iscsi/tgt_node.o 00:02:28.956 CC lib/iscsi/iscsi_rpc.o 00:02:28.956 CC lib/iscsi/iscsi_subsystem.o 00:02:28.956 CC lib/iscsi/task.o 00:02:28.956 CC lib/vhost/vhost.o 00:02:28.956 CC lib/vhost/vhost_rpc.o 00:02:28.956 CC lib/vhost/vhost_scsi.o 00:02:28.956 CC lib/vhost/vhost_blk.o 00:02:28.956 CC lib/vhost/rte_vhost_user.o 00:02:28.956 SO libspdk_ftl.so.9.0 00:02:29.215 SYMLINK libspdk_ftl.so 00:02:29.784 LIB libspdk_nvmf.a 00:02:29.784 SO libspdk_nvmf.so.19.1 00:02:29.784 LIB libspdk_vhost.a 00:02:29.784 SO libspdk_vhost.so.8.0 00:02:30.044 SYMLINK libspdk_vhost.so 00:02:30.044 SYMLINK libspdk_nvmf.so 00:02:30.044 LIB libspdk_iscsi.a 00:02:30.044 SO libspdk_iscsi.so.8.0 00:02:30.044 SYMLINK libspdk_iscsi.so 00:02:30.614 CC module/env_dpdk/env_dpdk_rpc.o 00:02:30.614 CC module/vfu_device/vfu_virtio.o 00:02:30.614 CC module/vfu_device/vfu_virtio_blk.o 00:02:30.614 CC module/vfu_device/vfu_virtio_scsi.o 00:02:30.614 CC module/vfu_device/vfu_virtio_rpc.o 00:02:30.614 CC module/vfu_device/vfu_virtio_fs.o 00:02:30.873 LIB libspdk_env_dpdk_rpc.a 00:02:30.873 CC module/keyring/linux/keyring.o 00:02:30.873 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:30.873 CC 
module/accel/iaa/accel_iaa.o 00:02:30.873 CC module/keyring/linux/keyring_rpc.o 00:02:30.873 CC module/accel/iaa/accel_iaa_rpc.o 00:02:30.873 CC module/accel/ioat/accel_ioat.o 00:02:30.873 CC module/blob/bdev/blob_bdev.o 00:02:30.873 CC module/accel/ioat/accel_ioat_rpc.o 00:02:30.873 CC module/accel/dsa/accel_dsa.o 00:02:30.873 CC module/accel/error/accel_error.o 00:02:30.873 CC module/sock/posix/posix.o 00:02:30.873 CC module/accel/dsa/accel_dsa_rpc.o 00:02:30.873 CC module/accel/error/accel_error_rpc.o 00:02:30.873 CC module/keyring/file/keyring.o 00:02:30.873 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:30.873 CC module/keyring/file/keyring_rpc.o 00:02:30.873 CC module/scheduler/gscheduler/gscheduler.o 00:02:30.873 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:30.873 CC module/fsdev/aio/fsdev_aio.o 00:02:30.873 CC module/fsdev/aio/linux_aio_mgr.o 00:02:30.873 SO libspdk_env_dpdk_rpc.so.6.0 00:02:30.873 SYMLINK libspdk_env_dpdk_rpc.so 00:02:30.873 LIB libspdk_keyring_linux.a 00:02:30.873 LIB libspdk_keyring_file.a 00:02:30.873 LIB libspdk_scheduler_gscheduler.a 00:02:30.873 SO libspdk_keyring_linux.so.1.0 00:02:31.133 LIB libspdk_scheduler_dpdk_governor.a 00:02:31.133 LIB libspdk_accel_ioat.a 00:02:31.133 LIB libspdk_accel_error.a 00:02:31.133 SO libspdk_keyring_file.so.2.0 00:02:31.133 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:31.133 LIB libspdk_scheduler_dynamic.a 00:02:31.133 LIB libspdk_accel_iaa.a 00:02:31.133 SO libspdk_scheduler_gscheduler.so.4.0 00:02:31.133 SO libspdk_accel_ioat.so.6.0 00:02:31.133 SYMLINK libspdk_keyring_linux.so 00:02:31.133 SO libspdk_accel_error.so.2.0 00:02:31.133 SO libspdk_scheduler_dynamic.so.4.0 00:02:31.133 SO libspdk_accel_iaa.so.3.0 00:02:31.133 LIB libspdk_blob_bdev.a 00:02:31.133 SYMLINK libspdk_keyring_file.so 00:02:31.133 LIB libspdk_accel_dsa.a 00:02:31.133 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:31.133 SYMLINK libspdk_scheduler_gscheduler.so 00:02:31.133 SO libspdk_blob_bdev.so.11.0 00:02:31.133 
SYMLINK libspdk_accel_ioat.so 00:02:31.133 SO libspdk_accel_dsa.so.5.0 00:02:31.133 SYMLINK libspdk_scheduler_dynamic.so 00:02:31.133 SYMLINK libspdk_accel_error.so 00:02:31.133 SYMLINK libspdk_accel_iaa.so 00:02:31.133 SYMLINK libspdk_blob_bdev.so 00:02:31.133 SYMLINK libspdk_accel_dsa.so 00:02:31.133 LIB libspdk_vfu_device.a 00:02:31.133 SO libspdk_vfu_device.so.3.0 00:02:31.393 SYMLINK libspdk_vfu_device.so 00:02:31.393 LIB libspdk_fsdev_aio.a 00:02:31.393 SO libspdk_fsdev_aio.so.1.0 00:02:31.393 LIB libspdk_sock_posix.a 00:02:31.393 SO libspdk_sock_posix.so.6.0 00:02:31.393 SYMLINK libspdk_fsdev_aio.so 00:02:31.652 SYMLINK libspdk_sock_posix.so 00:02:31.652 CC module/bdev/lvol/vbdev_lvol.o 00:02:31.652 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:31.652 CC module/blobfs/bdev/blobfs_bdev.o 00:02:31.652 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:31.652 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:31.652 CC module/bdev/delay/vbdev_delay.o 00:02:31.652 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:31.652 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:31.652 CC module/bdev/error/vbdev_error.o 00:02:31.652 CC module/bdev/error/vbdev_error_rpc.o 00:02:31.652 CC module/bdev/passthru/vbdev_passthru.o 00:02:31.652 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:31.652 CC module/bdev/nvme/bdev_nvme.o 00:02:31.652 CC module/bdev/ftl/bdev_ftl.o 00:02:31.652 CC module/bdev/raid/bdev_raid.o 00:02:31.652 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:31.652 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:31.652 CC module/bdev/nvme/nvme_rpc.o 00:02:31.652 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:31.652 CC module/bdev/raid/bdev_raid_rpc.o 00:02:31.652 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:31.652 CC module/bdev/raid/raid0.o 00:02:31.652 CC module/bdev/raid/bdev_raid_sb.o 00:02:31.652 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:31.652 CC module/bdev/nvme/bdev_mdns_client.o 00:02:31.652 CC module/bdev/nvme/vbdev_opal.o 00:02:31.652 CC 
module/bdev/null/bdev_null.o 00:02:31.652 CC module/bdev/raid/raid1.o 00:02:31.652 CC module/bdev/null/bdev_null_rpc.o 00:02:31.652 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:31.652 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:31.652 CC module/bdev/raid/concat.o 00:02:31.652 CC module/bdev/split/vbdev_split.o 00:02:31.652 CC module/bdev/aio/bdev_aio.o 00:02:31.652 CC module/bdev/split/vbdev_split_rpc.o 00:02:31.652 CC module/bdev/iscsi/bdev_iscsi.o 00:02:31.652 CC module/bdev/gpt/gpt.o 00:02:31.652 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:31.652 CC module/bdev/aio/bdev_aio_rpc.o 00:02:31.652 CC module/bdev/gpt/vbdev_gpt.o 00:02:31.652 CC module/bdev/malloc/bdev_malloc.o 00:02:31.652 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:31.911 LIB libspdk_blobfs_bdev.a 00:02:31.912 SO libspdk_blobfs_bdev.so.6.0 00:02:31.912 LIB libspdk_bdev_error.a 00:02:31.912 LIB libspdk_bdev_split.a 00:02:31.912 SYMLINK libspdk_blobfs_bdev.so 00:02:31.912 SO libspdk_bdev_error.so.6.0 00:02:31.912 LIB libspdk_bdev_ftl.a 00:02:31.912 SO libspdk_bdev_split.so.6.0 00:02:31.912 LIB libspdk_bdev_null.a 00:02:31.912 LIB libspdk_bdev_gpt.a 00:02:31.912 SO libspdk_bdev_ftl.so.6.0 00:02:31.912 LIB libspdk_bdev_delay.a 00:02:31.912 LIB libspdk_bdev_passthru.a 00:02:31.912 SYMLINK libspdk_bdev_error.so 00:02:31.912 LIB libspdk_bdev_aio.a 00:02:31.912 LIB libspdk_bdev_zone_block.a 00:02:31.912 SO libspdk_bdev_null.so.6.0 00:02:31.912 SO libspdk_bdev_gpt.so.6.0 00:02:31.912 SYMLINK libspdk_bdev_split.so 00:02:31.912 LIB libspdk_bdev_iscsi.a 00:02:31.912 SO libspdk_bdev_aio.so.6.0 00:02:31.912 LIB libspdk_bdev_malloc.a 00:02:31.912 SO libspdk_bdev_passthru.so.6.0 00:02:31.912 SO libspdk_bdev_delay.so.6.0 00:02:32.171 SO libspdk_bdev_zone_block.so.6.0 00:02:32.172 SYMLINK libspdk_bdev_ftl.so 00:02:32.172 SO libspdk_bdev_iscsi.so.6.0 00:02:32.172 SYMLINK libspdk_bdev_null.so 00:02:32.172 SO libspdk_bdev_malloc.so.6.0 00:02:32.172 SYMLINK libspdk_bdev_gpt.so 00:02:32.172 SYMLINK libspdk_bdev_aio.so 
00:02:32.172 SYMLINK libspdk_bdev_passthru.so 00:02:32.172 SYMLINK libspdk_bdev_delay.so 00:02:32.172 SYMLINK libspdk_bdev_zone_block.so 00:02:32.172 LIB libspdk_bdev_lvol.a 00:02:32.172 SYMLINK libspdk_bdev_iscsi.so 00:02:32.172 SYMLINK libspdk_bdev_malloc.so 00:02:32.172 SO libspdk_bdev_lvol.so.6.0 00:02:32.172 LIB libspdk_bdev_virtio.a 00:02:32.172 SO libspdk_bdev_virtio.so.6.0 00:02:32.172 SYMLINK libspdk_bdev_lvol.so 00:02:32.172 SYMLINK libspdk_bdev_virtio.so 00:02:32.431 LIB libspdk_bdev_raid.a 00:02:32.431 SO libspdk_bdev_raid.so.6.0 00:02:32.691 SYMLINK libspdk_bdev_raid.so 00:02:33.261 LIB libspdk_bdev_nvme.a 00:02:33.520 SO libspdk_bdev_nvme.so.7.0 00:02:33.520 SYMLINK libspdk_bdev_nvme.so 00:02:34.091 CC module/event/subsystems/iobuf/iobuf.o 00:02:34.091 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:34.091 CC module/event/subsystems/vmd/vmd.o 00:02:34.091 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:34.091 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:34.091 CC module/event/subsystems/scheduler/scheduler.o 00:02:34.091 CC module/event/subsystems/keyring/keyring.o 00:02:34.091 CC module/event/subsystems/sock/sock.o 00:02:34.091 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:34.091 CC module/event/subsystems/fsdev/fsdev.o 00:02:34.350 LIB libspdk_event_vmd.a 00:02:34.350 LIB libspdk_event_scheduler.a 00:02:34.350 LIB libspdk_event_keyring.a 00:02:34.350 LIB libspdk_event_vfu_tgt.a 00:02:34.350 LIB libspdk_event_vhost_blk.a 00:02:34.350 LIB libspdk_event_iobuf.a 00:02:34.350 LIB libspdk_event_sock.a 00:02:34.350 LIB libspdk_event_fsdev.a 00:02:34.350 SO libspdk_event_vmd.so.6.0 00:02:34.350 SO libspdk_event_keyring.so.1.0 00:02:34.350 SO libspdk_event_scheduler.so.4.0 00:02:34.350 SO libspdk_event_iobuf.so.3.0 00:02:34.350 SO libspdk_event_vfu_tgt.so.3.0 00:02:34.350 SO libspdk_event_vhost_blk.so.3.0 00:02:34.350 SO libspdk_event_sock.so.5.0 00:02:34.350 SO libspdk_event_fsdev.so.1.0 00:02:34.350 SYMLINK libspdk_event_keyring.so 
00:02:34.350 SYMLINK libspdk_event_vhost_blk.so 00:02:34.350 SYMLINK libspdk_event_vfu_tgt.so 00:02:34.350 SYMLINK libspdk_event_vmd.so 00:02:34.350 SYMLINK libspdk_event_scheduler.so 00:02:34.350 SYMLINK libspdk_event_iobuf.so 00:02:34.350 SYMLINK libspdk_event_fsdev.so 00:02:34.350 SYMLINK libspdk_event_sock.so 00:02:34.609 CC module/event/subsystems/accel/accel.o 00:02:34.869 LIB libspdk_event_accel.a 00:02:34.869 SO libspdk_event_accel.so.6.0 00:02:34.869 SYMLINK libspdk_event_accel.so 00:02:35.129 CC module/event/subsystems/bdev/bdev.o 00:02:35.388 LIB libspdk_event_bdev.a 00:02:35.388 SO libspdk_event_bdev.so.6.0 00:02:35.388 SYMLINK libspdk_event_bdev.so 00:02:35.957 CC module/event/subsystems/scsi/scsi.o 00:02:35.957 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:35.957 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:35.957 CC module/event/subsystems/nbd/nbd.o 00:02:35.957 CC module/event/subsystems/ublk/ublk.o 00:02:35.957 LIB libspdk_event_ublk.a 00:02:35.957 LIB libspdk_event_nbd.a 00:02:35.957 LIB libspdk_event_scsi.a 00:02:35.957 SO libspdk_event_ublk.so.3.0 00:02:35.957 SO libspdk_event_nbd.so.6.0 00:02:35.957 SO libspdk_event_scsi.so.6.0 00:02:35.957 LIB libspdk_event_nvmf.a 00:02:35.957 SYMLINK libspdk_event_ublk.so 00:02:35.957 SYMLINK libspdk_event_nbd.so 00:02:35.957 SYMLINK libspdk_event_scsi.so 00:02:35.957 SO libspdk_event_nvmf.so.6.0 00:02:36.217 SYMLINK libspdk_event_nvmf.so 00:02:36.476 CC module/event/subsystems/iscsi/iscsi.o 00:02:36.476 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:36.476 LIB libspdk_event_vhost_scsi.a 00:02:36.477 LIB libspdk_event_iscsi.a 00:02:36.477 SO libspdk_event_iscsi.so.6.0 00:02:36.477 SO libspdk_event_vhost_scsi.so.3.0 00:02:36.477 SYMLINK libspdk_event_iscsi.so 00:02:36.737 SYMLINK libspdk_event_vhost_scsi.so 00:02:36.737 SO libspdk.so.6.0 00:02:36.737 SYMLINK libspdk.so 00:02:36.996 CC app/trace_record/trace_record.o 00:02:36.996 CXX app/trace/trace.o 00:02:36.996 CC app/spdk_top/spdk_top.o 
00:02:36.996 CC app/spdk_nvme_discover/discovery_aer.o 00:02:36.996 CC app/spdk_lspci/spdk_lspci.o 00:02:36.996 CC app/spdk_nvme_identify/identify.o 00:02:36.996 CC test/rpc_client/rpc_client_test.o 00:02:37.265 CC app/spdk_nvme_perf/perf.o 00:02:37.265 TEST_HEADER include/spdk/accel.h 00:02:37.265 TEST_HEADER include/spdk/accel_module.h 00:02:37.265 TEST_HEADER include/spdk/assert.h 00:02:37.265 TEST_HEADER include/spdk/base64.h 00:02:37.265 TEST_HEADER include/spdk/barrier.h 00:02:37.265 TEST_HEADER include/spdk/bdev.h 00:02:37.265 TEST_HEADER include/spdk/bdev_module.h 00:02:37.265 TEST_HEADER include/spdk/bdev_zone.h 00:02:37.265 TEST_HEADER include/spdk/bit_array.h 00:02:37.265 TEST_HEADER include/spdk/bit_pool.h 00:02:37.265 TEST_HEADER include/spdk/blob_bdev.h 00:02:37.265 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:37.265 TEST_HEADER include/spdk/blobfs.h 00:02:37.265 TEST_HEADER include/spdk/blob.h 00:02:37.265 TEST_HEADER include/spdk/conf.h 00:02:37.265 TEST_HEADER include/spdk/cpuset.h 00:02:37.265 TEST_HEADER include/spdk/crc16.h 00:02:37.265 TEST_HEADER include/spdk/crc32.h 00:02:37.265 TEST_HEADER include/spdk/config.h 00:02:37.265 TEST_HEADER include/spdk/crc64.h 00:02:37.265 TEST_HEADER include/spdk/dif.h 00:02:37.265 TEST_HEADER include/spdk/dma.h 00:02:37.265 TEST_HEADER include/spdk/env_dpdk.h 00:02:37.265 TEST_HEADER include/spdk/endian.h 00:02:37.265 TEST_HEADER include/spdk/env.h 00:02:37.265 TEST_HEADER include/spdk/event.h 00:02:37.265 TEST_HEADER include/spdk/fd_group.h 00:02:37.265 TEST_HEADER include/spdk/fsdev.h 00:02:37.265 TEST_HEADER include/spdk/fd.h 00:02:37.265 TEST_HEADER include/spdk/fsdev_module.h 00:02:37.265 TEST_HEADER include/spdk/ftl.h 00:02:37.265 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:37.265 TEST_HEADER include/spdk/file.h 00:02:37.265 TEST_HEADER include/spdk/gpt_spec.h 00:02:37.265 TEST_HEADER include/spdk/hexlify.h 00:02:37.265 TEST_HEADER include/spdk/idxd.h 00:02:37.265 CC app/nvmf_tgt/nvmf_main.o 
00:02:37.265 TEST_HEADER include/spdk/histogram_data.h
00:02:37.265 TEST_HEADER include/spdk/idxd_spec.h
00:02:37.265 TEST_HEADER include/spdk/ioat.h
00:02:37.265 CC app/spdk_dd/spdk_dd.o
00:02:37.265 TEST_HEADER include/spdk/init.h
00:02:37.265 TEST_HEADER include/spdk/ioat_spec.h
00:02:37.265 CC app/iscsi_tgt/iscsi_tgt.o
00:02:37.265 CC examples/interrupt_tgt/interrupt_tgt.o
00:02:37.265 TEST_HEADER include/spdk/json.h
00:02:37.265 TEST_HEADER include/spdk/iscsi_spec.h
00:02:37.265 TEST_HEADER include/spdk/jsonrpc.h
00:02:37.265 TEST_HEADER include/spdk/keyring.h
00:02:37.265 TEST_HEADER include/spdk/likely.h
00:02:37.265 TEST_HEADER include/spdk/log.h
00:02:37.265 TEST_HEADER include/spdk/keyring_module.h
00:02:37.265 TEST_HEADER include/spdk/lvol.h
00:02:37.265 TEST_HEADER include/spdk/md5.h
00:02:37.265 TEST_HEADER include/spdk/mmio.h
00:02:37.265 TEST_HEADER include/spdk/memory.h
00:02:37.265 TEST_HEADER include/spdk/nbd.h
00:02:37.265 TEST_HEADER include/spdk/net.h
00:02:37.265 TEST_HEADER include/spdk/nvme.h
00:02:37.265 TEST_HEADER include/spdk/notify.h
00:02:37.265 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:37.265 TEST_HEADER include/spdk/nvme_intel.h
00:02:37.265 TEST_HEADER include/spdk/nvme_ocssd.h
00:02:37.265 TEST_HEADER include/spdk/nvme_spec.h
00:02:37.265 TEST_HEADER include/spdk/nvme_zns.h
00:02:37.265 TEST_HEADER include/spdk/nvmf_cmd.h
00:02:37.265 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:37.265 TEST_HEADER include/spdk/nvmf_transport.h
00:02:37.265 TEST_HEADER include/spdk/nvmf_spec.h
00:02:37.265 TEST_HEADER include/spdk/opal.h
00:02:37.265 TEST_HEADER include/spdk/nvmf.h
00:02:37.265 TEST_HEADER include/spdk/pci_ids.h
00:02:37.265 TEST_HEADER include/spdk/pipe.h
00:02:37.265 TEST_HEADER include/spdk/opal_spec.h
00:02:37.265 TEST_HEADER include/spdk/queue.h
00:02:37.265 CC app/spdk_tgt/spdk_tgt.o
00:02:37.265 TEST_HEADER include/spdk/rpc.h
00:02:37.265 TEST_HEADER include/spdk/scheduler.h
00:02:37.265 TEST_HEADER include/spdk/reduce.h
00:02:37.265 TEST_HEADER include/spdk/scsi.h
00:02:37.265 TEST_HEADER include/spdk/scsi_spec.h
00:02:37.265 TEST_HEADER include/spdk/sock.h
00:02:37.265 TEST_HEADER include/spdk/stdinc.h
00:02:37.265 TEST_HEADER include/spdk/string.h
00:02:37.265 TEST_HEADER include/spdk/thread.h
00:02:37.265 TEST_HEADER include/spdk/trace.h
00:02:37.265 TEST_HEADER include/spdk/trace_parser.h
00:02:37.265 TEST_HEADER include/spdk/tree.h
00:02:37.265 TEST_HEADER include/spdk/ublk.h
00:02:37.265 TEST_HEADER include/spdk/util.h
00:02:37.265 TEST_HEADER include/spdk/version.h
00:02:37.265 TEST_HEADER include/spdk/uuid.h
00:02:37.265 TEST_HEADER include/spdk/vfio_user_pci.h
00:02:37.265 TEST_HEADER include/spdk/vfio_user_spec.h
00:02:37.265 TEST_HEADER include/spdk/vhost.h
00:02:37.265 TEST_HEADER include/spdk/vmd.h
00:02:37.265 TEST_HEADER include/spdk/xor.h
00:02:37.266 TEST_HEADER include/spdk/zipf.h
00:02:37.266 CXX test/cpp_headers/accel.o
00:02:37.266 CXX test/cpp_headers/accel_module.o
00:02:37.266 CXX test/cpp_headers/assert.o
00:02:37.266 CXX test/cpp_headers/bdev.o
00:02:37.266 CXX test/cpp_headers/base64.o
00:02:37.266 CXX test/cpp_headers/barrier.o
00:02:37.266 CXX test/cpp_headers/bdev_module.o
00:02:37.266 CXX test/cpp_headers/bdev_zone.o
00:02:37.266 CXX test/cpp_headers/bit_array.o
00:02:37.266 CXX test/cpp_headers/blob_bdev.o
00:02:37.266 CXX test/cpp_headers/bit_pool.o
00:02:37.266 CXX test/cpp_headers/blobfs.o
00:02:37.266 CXX test/cpp_headers/blobfs_bdev.o
00:02:37.266 CXX test/cpp_headers/blob.o
00:02:37.266 CXX test/cpp_headers/config.o
00:02:37.266 CXX test/cpp_headers/conf.o
00:02:37.266 CXX test/cpp_headers/crc16.o
00:02:37.266 CXX test/cpp_headers/cpuset.o
00:02:37.266 CXX test/cpp_headers/crc64.o
00:02:37.266 CXX test/cpp_headers/dif.o
00:02:37.266 CXX test/cpp_headers/dma.o
00:02:37.266 CXX test/cpp_headers/crc32.o
00:02:37.266 CXX test/cpp_headers/env_dpdk.o
00:02:37.266 CXX test/cpp_headers/endian.o
00:02:37.266 CXX test/cpp_headers/env.o
00:02:37.266 CXX test/cpp_headers/event.o
00:02:37.266 CXX test/cpp_headers/fd_group.o
00:02:37.266 CXX test/cpp_headers/fd.o
00:02:37.266 CXX test/cpp_headers/file.o
00:02:37.266 CXX test/cpp_headers/fsdev.o
00:02:37.266 CXX test/cpp_headers/ftl.o
00:02:37.266 CXX test/cpp_headers/fuse_dispatcher.o
00:02:37.266 CXX test/cpp_headers/fsdev_module.o
00:02:37.266 CXX test/cpp_headers/gpt_spec.o
00:02:37.266 CXX test/cpp_headers/hexlify.o
00:02:37.266 CXX test/cpp_headers/histogram_data.o
00:02:37.266 CXX test/cpp_headers/idxd.o
00:02:37.266 CXX test/cpp_headers/ioat_spec.o
00:02:37.266 CXX test/cpp_headers/idxd_spec.o
00:02:37.266 CXX test/cpp_headers/init.o
00:02:37.266 CXX test/cpp_headers/ioat.o
00:02:37.266 CXX test/cpp_headers/iscsi_spec.o
00:02:37.266 CXX test/cpp_headers/jsonrpc.o
00:02:37.266 CXX test/cpp_headers/json.o
00:02:37.266 CXX test/cpp_headers/likely.o
00:02:37.266 CXX test/cpp_headers/keyring.o
00:02:37.266 CXX test/cpp_headers/keyring_module.o
00:02:37.266 CXX test/cpp_headers/md5.o
00:02:37.266 CXX test/cpp_headers/lvol.o
00:02:37.266 CXX test/cpp_headers/log.o
00:02:37.266 CXX test/cpp_headers/memory.o
00:02:37.266 CXX test/cpp_headers/mmio.o
00:02:37.266 CXX test/cpp_headers/net.o
00:02:37.266 CXX test/cpp_headers/nbd.o
00:02:37.266 CXX test/cpp_headers/notify.o
00:02:37.266 CXX test/cpp_headers/nvme.o
00:02:37.266 CXX test/cpp_headers/nvme_intel.o
00:02:37.266 CC test/thread/poller_perf/poller_perf.o
00:02:37.266 CXX test/cpp_headers/nvme_ocssd.o
00:02:37.266 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:37.266 CXX test/cpp_headers/nvme_spec.o
00:02:37.266 CXX test/cpp_headers/nvme_zns.o
00:02:37.266 CXX test/cpp_headers/nvmf_cmd.o
00:02:37.266 CXX test/cpp_headers/nvmf.o
00:02:37.266 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:37.266 CXX test/cpp_headers/nvmf_transport.o
00:02:37.266 CXX test/cpp_headers/nvmf_spec.o
00:02:37.266 CXX test/cpp_headers/opal.o
00:02:37.266 CC examples/util/zipf/zipf.o
00:02:37.266 CC examples/ioat/verify/verify.o
00:02:37.266 CC examples/ioat/perf/perf.o
00:02:37.266 CC app/fio/nvme/fio_plugin.o
00:02:37.266 CC test/env/memory/memory_ut.o
00:02:37.266 CC test/app/stub/stub.o
00:02:37.266 CC test/app/histogram_perf/histogram_perf.o
00:02:37.266 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:37.266 CC test/env/vtophys/vtophys.o
00:02:37.266 CC test/app/jsoncat/jsoncat.o
00:02:37.266 CC test/env/pci/pci_ut.o
00:02:37.266 CC test/app/bdev_svc/bdev_svc.o
00:02:37.266 CC test/dma/test_dma/test_dma.o
00:02:37.541 LINK spdk_lspci
00:02:37.541 CC app/fio/bdev/fio_plugin.o
00:02:37.541 LINK interrupt_tgt
00:02:37.807 LINK spdk_trace_record
00:02:37.807 LINK rpc_client_test
00:02:37.807 LINK iscsi_tgt
00:02:37.807 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:37.807 LINK spdk_nvme_discover
00:02:37.807 LINK nvmf_tgt
00:02:37.807 LINK histogram_perf
00:02:37.807 LINK zipf
00:02:37.807 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:37.807 CC test/env/mem_callbacks/mem_callbacks.o
00:02:37.807 LINK jsoncat
00:02:37.807 LINK spdk_tgt
00:02:37.807 CXX test/cpp_headers/opal_spec.o
00:02:37.807 CXX test/cpp_headers/pci_ids.o
00:02:37.807 CXX test/cpp_headers/queue.o
00:02:37.807 CXX test/cpp_headers/reduce.o
00:02:37.807 CXX test/cpp_headers/pipe.o
00:02:37.807 CXX test/cpp_headers/rpc.o
00:02:37.807 CXX test/cpp_headers/scheduler.o
00:02:37.807 CXX test/cpp_headers/scsi.o
00:02:37.807 CXX test/cpp_headers/sock.o
00:02:37.807 CXX test/cpp_headers/stdinc.o
00:02:37.807 CXX test/cpp_headers/scsi_spec.o
00:02:37.807 CXX test/cpp_headers/string.o
00:02:37.807 CXX test/cpp_headers/thread.o
00:02:37.807 LINK stub
00:02:37.807 CXX test/cpp_headers/trace.o
00:02:37.807 CXX test/cpp_headers/trace_parser.o
00:02:37.807 CXX test/cpp_headers/ublk.o
00:02:37.807 CXX test/cpp_headers/tree.o
00:02:37.807 CXX test/cpp_headers/util.o
00:02:37.807 CXX test/cpp_headers/uuid.o
00:02:37.807 LINK poller_perf
00:02:37.807 CXX test/cpp_headers/version.o
00:02:37.807 CXX test/cpp_headers/vfio_user_pci.o
00:02:37.807 CXX test/cpp_headers/vfio_user_spec.o
00:02:37.807 CXX test/cpp_headers/vhost.o
00:02:37.807 CXX test/cpp_headers/vmd.o
00:02:37.807 CXX test/cpp_headers/xor.o
00:02:37.807 CXX test/cpp_headers/zipf.o
00:02:37.807 LINK bdev_svc
00:02:38.066 LINK vtophys
00:02:38.066 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:38.066 LINK env_dpdk_post_init
00:02:38.066 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:38.066 LINK verify
00:02:38.066 LINK ioat_perf
00:02:38.066 LINK spdk_dd
00:02:38.066 LINK spdk_trace
00:02:38.323 LINK pci_ut
00:02:38.323 LINK test_dma
00:02:38.323 LINK spdk_nvme
00:02:38.323 CC examples/idxd/perf/perf.o
00:02:38.323 LINK spdk_nvme_identify
00:02:38.323 CC examples/vmd/led/led.o
00:02:38.323 CC examples/vmd/lsvmd/lsvmd.o
00:02:38.323 CC examples/sock/hello_world/hello_sock.o
00:02:38.323 CC examples/thread/thread/thread_ex.o
00:02:38.323 LINK spdk_top
00:02:38.323 LINK spdk_bdev
00:02:38.323 CC test/event/reactor/reactor.o
00:02:38.323 CC test/event/event_perf/event_perf.o
00:02:38.323 CC test/event/reactor_perf/reactor_perf.o
00:02:38.323 LINK spdk_nvme_perf
00:02:38.582 LINK nvme_fuzz
00:02:38.582 CC test/event/app_repeat/app_repeat.o
00:02:38.582 LINK vhost_fuzz
00:02:38.582 CC test/event/scheduler/scheduler.o
00:02:38.582 LINK led
00:02:38.582 LINK lsvmd
00:02:38.582 CC app/vhost/vhost.o
00:02:38.582 LINK reactor
00:02:38.582 LINK mem_callbacks
00:02:38.582 LINK event_perf
00:02:38.582 LINK reactor_perf
00:02:38.582 LINK idxd_perf
00:02:38.582 LINK hello_sock
00:02:38.582 LINK app_repeat
00:02:38.582 LINK thread
00:02:38.582 LINK scheduler
00:02:38.840 LINK vhost
00:02:38.840 CC test/nvme/startup/startup.o
00:02:38.840 CC test/nvme/err_injection/err_injection.o
00:02:38.840 CC test/nvme/sgl/sgl.o
00:02:38.840 CC test/nvme/reset/reset.o
00:02:38.840 CC test/nvme/e2edp/nvme_dp.o
00:02:38.840 CC test/nvme/boot_partition/boot_partition.o
00:02:38.840 CC test/nvme/reserve/reserve.o
00:02:38.840 CC test/nvme/fused_ordering/fused_ordering.o
00:02:38.840 CC test/nvme/fdp/fdp.o
00:02:38.840 CC test/nvme/overhead/overhead.o
00:02:38.840 CC test/nvme/connect_stress/connect_stress.o
00:02:38.840 CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:38.840 CC test/nvme/compliance/nvme_compliance.o
00:02:38.840 CC test/nvme/simple_copy/simple_copy.o
00:02:38.840 CC test/nvme/aer/aer.o
00:02:38.840 CC test/nvme/cuse/cuse.o
00:02:38.840 CC test/blobfs/mkfs/mkfs.o
00:02:38.840 CC test/accel/dif/dif.o
00:02:38.840 LINK memory_ut
00:02:38.840 CC test/lvol/esnap/esnap.o
00:02:38.840 LINK startup
00:02:39.097 LINK boot_partition
00:02:39.097 LINK connect_stress
00:02:39.097 LINK err_injection
00:02:39.097 LINK reserve
00:02:39.097 LINK fused_ordering
00:02:39.097 LINK doorbell_aers
00:02:39.097 LINK mkfs
00:02:39.097 LINK simple_copy
00:02:39.097 LINK sgl
00:02:39.097 LINK reset
00:02:39.097 LINK nvme_dp
00:02:39.097 CC examples/nvme/hotplug/hotplug.o
00:02:39.097 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:39.097 CC examples/nvme/abort/abort.o
00:02:39.097 CC examples/nvme/arbitration/arbitration.o
00:02:39.097 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:39.097 CC examples/nvme/reconnect/reconnect.o
00:02:39.097 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:39.097 CC examples/nvme/hello_world/hello_world.o
00:02:39.097 LINK overhead
00:02:39.097 LINK aer
00:02:39.097 LINK fdp
00:02:39.097 LINK nvme_compliance
00:02:39.097 CC examples/accel/perf/accel_perf.o
00:02:39.097 CC examples/blob/hello_world/hello_blob.o
00:02:39.097 CC examples/fsdev/hello_world/hello_fsdev.o
00:02:39.097 CC examples/blob/cli/blobcli.o
00:02:39.354 LINK pmr_persistence
00:02:39.354 LINK cmb_copy
00:02:39.354 LINK hello_world
00:02:39.354 LINK hotplug
00:02:39.354 LINK reconnect
00:02:39.354 LINK arbitration
00:02:39.354 LINK abort
00:02:39.354 LINK hello_blob
00:02:39.354 LINK iscsi_fuzz
00:02:39.354 LINK hello_fsdev
00:02:39.354 LINK dif
00:02:39.612 LINK nvme_manage
00:02:39.612 LINK accel_perf
00:02:39.612 LINK blobcli
00:02:39.870 LINK cuse
00:02:39.870 CC test/bdev/bdevio/bdevio.o
00:02:40.128 CC examples/bdev/hello_world/hello_bdev.o
00:02:40.128 CC examples/bdev/bdevperf/bdevperf.o
00:02:40.390 LINK hello_bdev
00:02:40.390 LINK bdevio
00:02:40.691 LINK bdevperf
00:02:41.332 CC examples/nvmf/nvmf/nvmf.o
00:02:41.332 LINK nvmf
00:02:42.711 LINK esnap
00:02:42.711
00:02:42.711 real 0m55.092s
00:02:42.711 user 8m15.579s
00:02:42.711 sys 3m37.548s
00:02:42.711 12:43:02 make -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:42.711 12:43:02 make -- common/autotest_common.sh@10 -- $ set +x
00:02:42.711 ************************************
00:02:42.711 END TEST make
00:02:42.711 ************************************
00:02:42.711 12:43:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:42.711 12:43:02 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:42.711 12:43:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:42.711 12:43:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:42.711 12:43:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:42.711 12:43:02 -- pm/common@44 -- $ pid=942571
00:02:42.711 12:43:02 -- pm/common@50 -- $ kill -TERM 942571
00:02:42.711 12:43:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:42.711 12:43:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:42.711 12:43:02 -- pm/common@44 -- $ pid=942573
00:02:42.711 12:43:02 -- pm/common@50 -- $ kill -TERM 942573
00:02:42.711 12:43:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:42.711 12:43:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:42.711 12:43:02 -- pm/common@44 -- $ pid=942575
00:02:42.711 12:43:02 -- pm/common@50 -- $ kill -TERM 942575
00:02:42.711 12:43:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:42.711 12:43:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:42.711 12:43:02 -- pm/common@44 -- $ pid=942600
00:02:42.711 12:43:02 -- pm/common@50 -- $ sudo -E kill -TERM 942600
00:02:42.971 12:43:03 -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:02:42.971 12:43:03 -- common/autotest_common.sh@1691 -- # lcov --version
00:02:42.971 12:43:03 -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:02:42.971 12:43:03 -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:02:42.971 12:43:03 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:02:42.971 12:43:03 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:02:42.971 12:43:03 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:02:42.971 12:43:03 -- scripts/common.sh@336 -- # IFS=.-:
00:02:42.971 12:43:03 -- scripts/common.sh@336 -- # read -ra ver1
00:02:42.971 12:43:03 -- scripts/common.sh@337 -- # IFS=.-:
00:02:42.971 12:43:03 -- scripts/common.sh@337 -- # read -ra ver2
00:02:42.971 12:43:03 -- scripts/common.sh@338 -- # local 'op=<'
00:02:42.971 12:43:03 -- scripts/common.sh@340 -- # ver1_l=2
00:02:42.971 12:43:03 -- scripts/common.sh@341 -- # ver2_l=1
00:02:42.971 12:43:03 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:02:42.971 12:43:03 -- scripts/common.sh@344 -- # case "$op" in
00:02:42.971 12:43:03 -- scripts/common.sh@345 -- # : 1
00:02:42.971 12:43:03 -- scripts/common.sh@364 -- # (( v = 0 ))
00:02:42.971 12:43:03 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:42.971 12:43:03 -- scripts/common.sh@365 -- # decimal 1
00:02:42.971 12:43:03 -- scripts/common.sh@353 -- # local d=1
00:02:42.971 12:43:03 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:02:42.971 12:43:03 -- scripts/common.sh@355 -- # echo 1
00:02:42.971 12:43:03 -- scripts/common.sh@365 -- # ver1[v]=1
00:02:42.971 12:43:03 -- scripts/common.sh@366 -- # decimal 2
00:02:42.971 12:43:03 -- scripts/common.sh@353 -- # local d=2
00:02:42.971 12:43:03 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:02:42.971 12:43:03 -- scripts/common.sh@355 -- # echo 2
00:02:42.971 12:43:03 -- scripts/common.sh@366 -- # ver2[v]=2
00:02:42.971 12:43:03 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:02:42.971 12:43:03 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:02:42.971 12:43:03 -- scripts/common.sh@368 -- # return 0
00:02:42.971 12:43:03 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:02:42.971 12:43:03 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:02:42.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:42.971 --rc genhtml_branch_coverage=1
00:02:42.971 --rc genhtml_function_coverage=1
00:02:42.971 --rc genhtml_legend=1
00:02:42.971 --rc geninfo_all_blocks=1
00:02:42.971 --rc geninfo_unexecuted_blocks=1
00:02:42.971
00:02:42.971 '
00:02:42.971 12:43:03 -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:02:42.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:42.971 --rc genhtml_branch_coverage=1
00:02:42.971 --rc genhtml_function_coverage=1
00:02:42.971 --rc genhtml_legend=1
00:02:42.971 --rc geninfo_all_blocks=1
00:02:42.971 --rc geninfo_unexecuted_blocks=1
00:02:42.971
00:02:42.971 '
00:02:42.971 12:43:03 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:02:42.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:42.971 --rc genhtml_branch_coverage=1
00:02:42.971 --rc genhtml_function_coverage=1
00:02:42.971 --rc genhtml_legend=1
00:02:42.971 --rc geninfo_all_blocks=1
00:02:42.971 --rc geninfo_unexecuted_blocks=1
00:02:42.971
00:02:42.971 '
00:02:42.971 12:43:03 -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:02:42.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:42.971 --rc genhtml_branch_coverage=1
00:02:42.971 --rc genhtml_function_coverage=1
00:02:42.971 --rc genhtml_legend=1
00:02:42.971 --rc geninfo_all_blocks=1
00:02:42.971 --rc geninfo_unexecuted_blocks=1
00:02:42.971
00:02:42.971 '
00:02:42.971 12:43:03 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:42.971 12:43:03 -- nvmf/common.sh@7 -- # uname -s
00:02:42.971 12:43:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:42.971 12:43:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:42.971 12:43:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:42.971 12:43:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:42.971 12:43:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:42.971 12:43:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:42.971 12:43:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:42.971 12:43:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:42.971 12:43:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:42.971 12:43:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:42.971 12:43:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:02:42.971 12:43:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:02:42.971 12:43:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:42.971 12:43:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:42.971 12:43:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:42.971 12:43:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:42.971 12:43:03 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:42.971 12:43:03 -- scripts/common.sh@15 -- # shopt -s extglob
00:02:42.971 12:43:03 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:42.971 12:43:03 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:42.971 12:43:03 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:42.971 12:43:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:42.971 12:43:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:42.971 12:43:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:42.971 12:43:03 -- paths/export.sh@5 -- # export PATH
00:02:42.971 12:43:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:42.971 12:43:03 -- nvmf/common.sh@51 -- # : 0
00:02:42.971 12:43:03 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:02:42.971 12:43:03 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:02:42.971 12:43:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:42.971 12:43:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:42.971 12:43:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:42.971 12:43:03 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:02:42.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:02:42.971 12:43:03 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:02:42.971 12:43:03 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:02:42.971 12:43:03 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:02:42.971 12:43:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:42.971 12:43:03 -- spdk/autotest.sh@32 -- # uname -s
00:02:42.971 12:43:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:42.971 12:43:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:42.971 12:43:03 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:42.971 12:43:03 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:42.971 12:43:03 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:42.971 12:43:03 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:42.971 12:43:03 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:42.971 12:43:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:42.971 12:43:03 -- spdk/autotest.sh@48 -- # udevadm_pid=1005348
00:02:42.971 12:43:03 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:42.971 12:43:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:42.971 12:43:03 -- pm/common@17 -- # local monitor
00:02:42.971 12:43:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:42.971 12:43:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:42.971 12:43:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:42.971 12:43:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:42.971 12:43:03 -- pm/common@21 -- # date +%s
00:02:42.971 12:43:03 -- pm/common@21 -- # date +%s
00:02:42.972 12:43:03 -- pm/common@25 -- # sleep 1
00:02:42.972 12:43:03 -- pm/common@21 -- # date +%s
00:02:42.972 12:43:03 -- pm/common@21 -- # date +%s
00:02:42.972 12:43:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728988983
00:02:42.972 12:43:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728988983
00:02:42.972 12:43:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728988983
00:02:42.972 12:43:03 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728988983
00:02:42.972 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728988983_collect-vmstat.pm.log
00:02:42.972 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728988983_collect-cpu-load.pm.log
00:02:42.972 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728988983_collect-cpu-temp.pm.log
00:02:42.972 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728988983_collect-bmc-pm.bmc.pm.log
00:02:43.910 12:43:04 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:43.910 12:43:04 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:43.910 12:43:04 -- common/autotest_common.sh@724 -- # xtrace_disable
00:02:43.910 12:43:04 -- common/autotest_common.sh@10 -- # set +x
00:02:43.910 12:43:04 -- spdk/autotest.sh@59 -- # create_test_list
00:02:43.910 12:43:04 -- common/autotest_common.sh@748 -- # xtrace_disable
00:02:43.910 12:43:04 -- common/autotest_common.sh@10 -- # set +x
00:02:44.169 12:43:04 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:44.169 12:43:04 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:44.169 12:43:04 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:44.169 12:43:04 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:44.169 12:43:04 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:44.169 12:43:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:44.169 12:43:04 -- common/autotest_common.sh@1455 -- # uname
00:02:44.169 12:43:04 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:02:44.169 12:43:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:44.169 12:43:04 -- common/autotest_common.sh@1475 -- # uname
00:02:44.169 12:43:04 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:02:44.169 12:43:04 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:02:44.169 12:43:04 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:02:44.169 lcov: LCOV version 1.15
00:02:44.169 12:43:04 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:03:02.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:02.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:08.887 12:43:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:08.887 12:43:28 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:08.887 12:43:28 -- common/autotest_common.sh@10 -- # set +x
00:03:08.887 12:43:28 -- spdk/autotest.sh@78 -- # rm -f
00:03:08.887 12:43:28 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:11.421 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:03:11.421 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:11.421 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:11.421 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:11.421 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:11.680 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:11.680 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:11.680 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:11.681 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:11.681 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:11.681 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:11.681 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:11.681 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:11.681 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:11.681 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:11.681 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:11.681 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:11.940 12:43:32 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:11.940 12:43:32 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:11.940 12:43:32 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:11.940 12:43:32 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:03:11.940 12:43:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:11.940 12:43:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:11.940 12:43:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:11.940 12:43:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:11.940 12:43:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:03:11.940 12:43:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:11.940 12:43:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:11.940 12:43:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:11.940 12:43:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:11.940 12:43:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:11.940 12:43:32 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:11.940 No valid GPT data, bailing
00:03:11.940 12:43:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:11.940 12:43:32 -- scripts/common.sh@394 -- # pt=
00:03:11.940 12:43:32 -- scripts/common.sh@395 -- # return 1
00:03:11.940 12:43:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:11.940 1+0 records in
00:03:11.940 1+0 records out
00:03:11.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00552897 s, 190 MB/s
00:03:11.940 12:43:32 -- spdk/autotest.sh@105 -- # sync
00:03:11.940 12:43:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:11.940 12:43:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:11.940 12:43:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:18.513 12:43:37 -- spdk/autotest.sh@111 -- # uname -s
00:03:18.513 12:43:37 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:18.513 12:43:37 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:18.513 12:43:37 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:20.420 Hugepages
00:03:20.420 node hugesize free / total
00:03:20.420 node0 1048576kB 0 / 0
00:03:20.420 node0 2048kB 0 / 0
00:03:20.420 node1 1048576kB 0 / 0
00:03:20.420 node1 2048kB 0 / 0
00:03:20.420
00:03:20.420 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:20.420 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:20.420 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:20.420 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:20.420 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:20.420 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:20.420 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:20.420 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:20.420 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:20.420 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:20.420 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:20.420 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:20.420 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:20.420 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:20.420 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:20.420 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:20.420 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:20.420 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:20.420 12:43:40 -- spdk/autotest.sh@117 -- # uname -s
00:03:20.420 12:43:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:20.420 12:43:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:20.420 12:43:40 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:23.711 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:23.711 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:24.649 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:24.908 12:43:45 -- common/autotest_common.sh@1515 -- # sleep 1
00:03:25.847 12:43:46 -- common/autotest_common.sh@1516 -- # bdfs=()
00:03:25.847 12:43:46 -- common/autotest_common.sh@1516 -- # local bdfs
00:03:25.847 12:43:46 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:03:25.847 12:43:46 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:03:25.847 12:43:46 -- common/autotest_common.sh@1496 -- # bdfs=()
00:03:25.847 12:43:46 -- common/autotest_common.sh@1496 -- # local bdfs
00:03:25.847 12:43:46 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:25.847 12:43:46 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:25.847 12:43:46 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:03:25.847 12:43:46 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:03:25.847 12:43:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0
00:03:25.847 12:43:46 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:29.147 Waiting for block devices as requested
00:03:29.147 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:03:29.147 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:29.147 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:29.147 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:29.147 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:29.147 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:29.147 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:29.407 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:29.407 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:29.407 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:29.666 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:29.666 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:29.666 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:29.925 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:29.925 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:29.925 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:29.925 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:30.185 12:43:50 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:03:30.185 12:43:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:03:30.185 12:43:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0
00:03:30.185 12:43:50 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme
00:03:30.185 12:43:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:03:30.185 12:43:50 -- common/autotest_common.sh@1486 -- # [[ -z
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:30.185 12:43:50 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:30.185 12:43:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:30.185 12:43:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:30.185 12:43:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:30.185 12:43:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:30.185 12:43:50 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:30.185 12:43:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:30.185 12:43:50 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:03:30.185 12:43:50 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:30.185 12:43:50 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:30.185 12:43:50 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:30.185 12:43:50 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:30.185 12:43:50 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:30.185 12:43:50 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:30.185 12:43:50 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:30.185 12:43:50 -- common/autotest_common.sh@1541 -- # continue 00:03:30.185 12:43:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:30.185 12:43:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:30.185 12:43:50 -- common/autotest_common.sh@10 -- # set +x 00:03:30.185 12:43:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:30.185 12:43:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:30.185 12:43:50 -- common/autotest_common.sh@10 -- # set +x 00:03:30.185 12:43:50 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.475 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:33.475 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:33.475 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:34.854 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:34.854 12:43:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:34.854 12:43:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:34.854 12:43:54 -- common/autotest_common.sh@10 -- # set +x 00:03:34.854 12:43:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:34.854 12:43:54 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:34.854 12:43:54 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:34.854 12:43:54 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:34.854 12:43:54 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:34.854 12:43:54 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:34.854 12:43:54 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:34.854 12:43:54 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:34.854 12:43:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:34.854 12:43:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:34.854 12:43:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:34.854 12:43:54 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:34.854 12:43:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:34.854 12:43:55 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:34.854 12:43:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:03:34.854 12:43:55 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:34.854 12:43:55 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:34.854 12:43:55 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:03:34.854 12:43:55 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:34.854 12:43:55 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:03:34.854 12:43:55 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:03:34.854 12:43:55 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:03:34.854 12:43:55 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:03:34.854 12:43:55 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1019569 00:03:34.854 12:43:55 -- common/autotest_common.sh@1583 -- # waitforlisten 1019569 00:03:34.854 12:43:55 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:34.854 12:43:55 -- common/autotest_common.sh@831 -- # '[' -z 1019569 ']' 00:03:34.854 12:43:55 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:34.854 12:43:55 -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:34.854 12:43:55 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:34.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:34.854 12:43:55 -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:34.854 12:43:55 -- common/autotest_common.sh@10 -- # set +x 00:03:34.854 [2024-10-15 12:43:55.104854] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:03:34.854 [2024-10-15 12:43:55.104901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019569 ] 00:03:34.854 [2024-10-15 12:43:55.174344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.112 [2024-10-15 12:43:55.216759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:35.112 12:43:55 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:35.112 12:43:55 -- common/autotest_common.sh@864 -- # return 0 00:03:35.112 12:43:55 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:35.112 12:43:55 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:35.371 12:43:55 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:38.657 nvme0n1 00:03:38.657 12:43:58 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:38.657 [2024-10-15 12:43:58.607260] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:38.657 request: 00:03:38.657 { 00:03:38.657 "nvme_ctrlr_name": "nvme0", 00:03:38.657 "password": "test", 00:03:38.657 "method": "bdev_nvme_opal_revert", 00:03:38.657 "req_id": 1 00:03:38.657 } 00:03:38.657 Got JSON-RPC error response 00:03:38.657 response: 00:03:38.657 { 00:03:38.657 "code": -32602, 00:03:38.657 "message": "Invalid parameters" 00:03:38.657 } 00:03:38.657 12:43:58 -- common/autotest_common.sh@1589 -- # true 
00:03:38.657 12:43:58 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:38.657 12:43:58 -- common/autotest_common.sh@1593 -- # killprocess 1019569 00:03:38.657 12:43:58 -- common/autotest_common.sh@950 -- # '[' -z 1019569 ']' 00:03:38.657 12:43:58 -- common/autotest_common.sh@954 -- # kill -0 1019569 00:03:38.657 12:43:58 -- common/autotest_common.sh@955 -- # uname 00:03:38.657 12:43:58 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:38.657 12:43:58 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1019569 00:03:38.657 12:43:58 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:38.657 12:43:58 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:38.657 12:43:58 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1019569' 00:03:38.657 killing process with pid 1019569 00:03:38.657 12:43:58 -- common/autotest_common.sh@969 -- # kill 1019569 00:03:38.657 12:43:58 -- common/autotest_common.sh@974 -- # wait 1019569 00:03:40.559 12:44:00 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:40.559 12:44:00 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:40.559 12:44:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.559 12:44:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.559 12:44:00 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:40.559 12:44:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:40.559 12:44:00 -- common/autotest_common.sh@10 -- # set +x 00:03:40.559 12:44:00 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:40.559 12:44:00 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:40.559 12:44:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.559 12:44:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.559 12:44:00 -- common/autotest_common.sh@10 -- # set +x 00:03:40.559 ************************************ 00:03:40.559 START TEST env 00:03:40.559 
************************************ 00:03:40.819 12:44:00 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:40.819 * Looking for test storage... 00:03:40.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:40.819 12:44:00 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:40.819 12:44:00 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:40.819 12:44:00 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:40.819 12:44:01 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:40.819 12:44:01 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.819 12:44:01 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.819 12:44:01 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.819 12:44:01 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.819 12:44:01 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.819 12:44:01 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.819 12:44:01 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.819 12:44:01 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.819 12:44:01 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.819 12:44:01 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.819 12:44:01 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.819 12:44:01 env -- scripts/common.sh@344 -- # case "$op" in 00:03:40.819 12:44:01 env -- scripts/common.sh@345 -- # : 1 00:03:40.819 12:44:01 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.819 12:44:01 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.819 12:44:01 env -- scripts/common.sh@365 -- # decimal 1 00:03:40.819 12:44:01 env -- scripts/common.sh@353 -- # local d=1 00:03:40.819 12:44:01 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.819 12:44:01 env -- scripts/common.sh@355 -- # echo 1 00:03:40.819 12:44:01 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.819 12:44:01 env -- scripts/common.sh@366 -- # decimal 2 00:03:40.819 12:44:01 env -- scripts/common.sh@353 -- # local d=2 00:03:40.819 12:44:01 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.819 12:44:01 env -- scripts/common.sh@355 -- # echo 2 00:03:40.819 12:44:01 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.819 12:44:01 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.819 12:44:01 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.819 12:44:01 env -- scripts/common.sh@368 -- # return 0 00:03:40.819 12:44:01 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.819 12:44:01 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:40.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.819 --rc genhtml_branch_coverage=1 00:03:40.819 --rc genhtml_function_coverage=1 00:03:40.819 --rc genhtml_legend=1 00:03:40.819 --rc geninfo_all_blocks=1 00:03:40.819 --rc geninfo_unexecuted_blocks=1 00:03:40.819 00:03:40.819 ' 00:03:40.819 12:44:01 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:40.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.819 --rc genhtml_branch_coverage=1 00:03:40.819 --rc genhtml_function_coverage=1 00:03:40.819 --rc genhtml_legend=1 00:03:40.819 --rc geninfo_all_blocks=1 00:03:40.819 --rc geninfo_unexecuted_blocks=1 00:03:40.819 00:03:40.819 ' 00:03:40.819 12:44:01 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:40.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:40.819 --rc genhtml_branch_coverage=1 00:03:40.819 --rc genhtml_function_coverage=1 00:03:40.819 --rc genhtml_legend=1 00:03:40.819 --rc geninfo_all_blocks=1 00:03:40.819 --rc geninfo_unexecuted_blocks=1 00:03:40.819 00:03:40.819 ' 00:03:40.819 12:44:01 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:40.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.819 --rc genhtml_branch_coverage=1 00:03:40.819 --rc genhtml_function_coverage=1 00:03:40.819 --rc genhtml_legend=1 00:03:40.819 --rc geninfo_all_blocks=1 00:03:40.819 --rc geninfo_unexecuted_blocks=1 00:03:40.819 00:03:40.819 ' 00:03:40.819 12:44:01 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:40.819 12:44:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.819 12:44:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.819 12:44:01 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.819 ************************************ 00:03:40.819 START TEST env_memory 00:03:40.819 ************************************ 00:03:40.819 12:44:01 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:40.819 00:03:40.819 00:03:40.819 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.819 http://cunit.sourceforge.net/ 00:03:40.819 00:03:40.819 00:03:40.819 Suite: memory 00:03:40.819 Test: alloc and free memory map ...[2024-10-15 12:44:01.127339] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:40.819 passed 00:03:41.079 Test: mem map translation ...[2024-10-15 12:44:01.145860] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:41.079 [2024-10-15 
12:44:01.145875] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:41.079 [2024-10-15 12:44:01.145910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:41.079 [2024-10-15 12:44:01.145917] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:41.079 passed 00:03:41.079 Test: mem map registration ...[2024-10-15 12:44:01.181542] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:41.079 [2024-10-15 12:44:01.181555] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:41.079 passed 00:03:41.079 Test: mem map adjacent registrations ...passed 00:03:41.079 00:03:41.079 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.079 suites 1 1 n/a 0 0 00:03:41.079 tests 4 4 4 0 0 00:03:41.079 asserts 152 152 152 0 n/a 00:03:41.079 00:03:41.079 Elapsed time = 0.131 seconds 00:03:41.079 00:03:41.079 real 0m0.144s 00:03:41.079 user 0m0.138s 00:03:41.079 sys 0m0.006s 00:03:41.079 12:44:01 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.079 12:44:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:41.079 ************************************ 00:03:41.079 END TEST env_memory 00:03:41.079 ************************************ 00:03:41.079 12:44:01 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:41.079 12:44:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:03:41.079 12:44:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.079 12:44:01 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.079 ************************************ 00:03:41.079 START TEST env_vtophys 00:03:41.079 ************************************ 00:03:41.079 12:44:01 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:41.079 EAL: lib.eal log level changed from notice to debug 00:03:41.079 EAL: Detected lcore 0 as core 0 on socket 0 00:03:41.079 EAL: Detected lcore 1 as core 1 on socket 0 00:03:41.079 EAL: Detected lcore 2 as core 2 on socket 0 00:03:41.079 EAL: Detected lcore 3 as core 3 on socket 0 00:03:41.079 EAL: Detected lcore 4 as core 4 on socket 0 00:03:41.079 EAL: Detected lcore 5 as core 5 on socket 0 00:03:41.079 EAL: Detected lcore 6 as core 6 on socket 0 00:03:41.079 EAL: Detected lcore 7 as core 8 on socket 0 00:03:41.079 EAL: Detected lcore 8 as core 9 on socket 0 00:03:41.079 EAL: Detected lcore 9 as core 10 on socket 0 00:03:41.079 EAL: Detected lcore 10 as core 11 on socket 0 00:03:41.079 EAL: Detected lcore 11 as core 12 on socket 0 00:03:41.079 EAL: Detected lcore 12 as core 13 on socket 0 00:03:41.079 EAL: Detected lcore 13 as core 16 on socket 0 00:03:41.079 EAL: Detected lcore 14 as core 17 on socket 0 00:03:41.079 EAL: Detected lcore 15 as core 18 on socket 0 00:03:41.080 EAL: Detected lcore 16 as core 19 on socket 0 00:03:41.080 EAL: Detected lcore 17 as core 20 on socket 0 00:03:41.080 EAL: Detected lcore 18 as core 21 on socket 0 00:03:41.080 EAL: Detected lcore 19 as core 25 on socket 0 00:03:41.080 EAL: Detected lcore 20 as core 26 on socket 0 00:03:41.080 EAL: Detected lcore 21 as core 27 on socket 0 00:03:41.080 EAL: Detected lcore 22 as core 28 on socket 0 00:03:41.080 EAL: Detected lcore 23 as core 29 on socket 0 00:03:41.080 EAL: Detected lcore 24 as core 0 on socket 1 00:03:41.080 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:41.080 EAL: Detected lcore 26 as core 2 on socket 1 00:03:41.080 EAL: Detected lcore 27 as core 3 on socket 1 00:03:41.080 EAL: Detected lcore 28 as core 4 on socket 1 00:03:41.080 EAL: Detected lcore 29 as core 5 on socket 1 00:03:41.080 EAL: Detected lcore 30 as core 6 on socket 1 00:03:41.080 EAL: Detected lcore 31 as core 8 on socket 1 00:03:41.080 EAL: Detected lcore 32 as core 10 on socket 1 00:03:41.080 EAL: Detected lcore 33 as core 11 on socket 1 00:03:41.080 EAL: Detected lcore 34 as core 12 on socket 1 00:03:41.080 EAL: Detected lcore 35 as core 13 on socket 1 00:03:41.080 EAL: Detected lcore 36 as core 16 on socket 1 00:03:41.080 EAL: Detected lcore 37 as core 17 on socket 1 00:03:41.080 EAL: Detected lcore 38 as core 18 on socket 1 00:03:41.080 EAL: Detected lcore 39 as core 19 on socket 1 00:03:41.080 EAL: Detected lcore 40 as core 20 on socket 1 00:03:41.080 EAL: Detected lcore 41 as core 21 on socket 1 00:03:41.080 EAL: Detected lcore 42 as core 24 on socket 1 00:03:41.080 EAL: Detected lcore 43 as core 25 on socket 1 00:03:41.080 EAL: Detected lcore 44 as core 26 on socket 1 00:03:41.080 EAL: Detected lcore 45 as core 27 on socket 1 00:03:41.080 EAL: Detected lcore 46 as core 28 on socket 1 00:03:41.080 EAL: Detected lcore 47 as core 29 on socket 1 00:03:41.080 EAL: Detected lcore 48 as core 0 on socket 0 00:03:41.080 EAL: Detected lcore 49 as core 1 on socket 0 00:03:41.080 EAL: Detected lcore 50 as core 2 on socket 0 00:03:41.080 EAL: Detected lcore 51 as core 3 on socket 0 00:03:41.080 EAL: Detected lcore 52 as core 4 on socket 0 00:03:41.080 EAL: Detected lcore 53 as core 5 on socket 0 00:03:41.080 EAL: Detected lcore 54 as core 6 on socket 0 00:03:41.080 EAL: Detected lcore 55 as core 8 on socket 0 00:03:41.080 EAL: Detected lcore 56 as core 9 on socket 0 00:03:41.080 EAL: Detected lcore 57 as core 10 on socket 0 00:03:41.080 EAL: Detected lcore 58 as core 11 on socket 0 00:03:41.080 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:41.080 EAL: Detected lcore 60 as core 13 on socket 0 00:03:41.080 EAL: Detected lcore 61 as core 16 on socket 0 00:03:41.080 EAL: Detected lcore 62 as core 17 on socket 0 00:03:41.080 EAL: Detected lcore 63 as core 18 on socket 0 00:03:41.080 EAL: Detected lcore 64 as core 19 on socket 0 00:03:41.080 EAL: Detected lcore 65 as core 20 on socket 0 00:03:41.080 EAL: Detected lcore 66 as core 21 on socket 0 00:03:41.080 EAL: Detected lcore 67 as core 25 on socket 0 00:03:41.080 EAL: Detected lcore 68 as core 26 on socket 0 00:03:41.080 EAL: Detected lcore 69 as core 27 on socket 0 00:03:41.080 EAL: Detected lcore 70 as core 28 on socket 0 00:03:41.080 EAL: Detected lcore 71 as core 29 on socket 0 00:03:41.080 EAL: Detected lcore 72 as core 0 on socket 1 00:03:41.080 EAL: Detected lcore 73 as core 1 on socket 1 00:03:41.080 EAL: Detected lcore 74 as core 2 on socket 1 00:03:41.080 EAL: Detected lcore 75 as core 3 on socket 1 00:03:41.080 EAL: Detected lcore 76 as core 4 on socket 1 00:03:41.080 EAL: Detected lcore 77 as core 5 on socket 1 00:03:41.080 EAL: Detected lcore 78 as core 6 on socket 1 00:03:41.080 EAL: Detected lcore 79 as core 8 on socket 1 00:03:41.080 EAL: Detected lcore 80 as core 10 on socket 1 00:03:41.080 EAL: Detected lcore 81 as core 11 on socket 1 00:03:41.080 EAL: Detected lcore 82 as core 12 on socket 1 00:03:41.080 EAL: Detected lcore 83 as core 13 on socket 1 00:03:41.080 EAL: Detected lcore 84 as core 16 on socket 1 00:03:41.080 EAL: Detected lcore 85 as core 17 on socket 1 00:03:41.080 EAL: Detected lcore 86 as core 18 on socket 1 00:03:41.080 EAL: Detected lcore 87 as core 19 on socket 1 00:03:41.080 EAL: Detected lcore 88 as core 20 on socket 1 00:03:41.080 EAL: Detected lcore 89 as core 21 on socket 1 00:03:41.080 EAL: Detected lcore 90 as core 24 on socket 1 00:03:41.080 EAL: Detected lcore 91 as core 25 on socket 1 00:03:41.080 EAL: Detected lcore 92 as core 26 on socket 1 00:03:41.080 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:41.080 EAL: Detected lcore 94 as core 28 on socket 1 00:03:41.080 EAL: Detected lcore 95 as core 29 on socket 1 00:03:41.080 EAL: Maximum logical cores by configuration: 128 00:03:41.080 EAL: Detected CPU lcores: 96 00:03:41.080 EAL: Detected NUMA nodes: 2 00:03:41.080 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:41.080 EAL: Detected shared linkage of DPDK 00:03:41.080 EAL: No shared files mode enabled, IPC will be disabled 00:03:41.080 EAL: Bus pci wants IOVA as 'DC' 00:03:41.080 EAL: Buses did not request a specific IOVA mode. 00:03:41.080 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:41.080 EAL: Selected IOVA mode 'VA' 00:03:41.080 EAL: Probing VFIO support... 00:03:41.080 EAL: IOMMU type 1 (Type 1) is supported 00:03:41.080 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:41.080 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:41.080 EAL: VFIO support initialized 00:03:41.080 EAL: Ask a virtual area of 0x2e000 bytes 00:03:41.080 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:41.080 EAL: Setting up physically contiguous memory... 
00:03:41.080 EAL: Setting maximum number of open files to 524288 00:03:41.080 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:41.080 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:41.080 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:41.080 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.080 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:41.080 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.080 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.080 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:41.080 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:41.080 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.080 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:41.080 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.080 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.080 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:41.080 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:41.080 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.080 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:41.080 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.080 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.080 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:41.080 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:41.080 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.080 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:41.080 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.080 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.080 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:41.080 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:41.080 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:41.080 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.080 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:41.080 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.080 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.080 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:41.080 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:41.080 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.080 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:41.080 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.080 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.080 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:41.080 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:41.080 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.080 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:41.080 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.080 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.080 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:41.080 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:41.080 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.080 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:41.080 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.080 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.080 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:41.080 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:41.080 EAL: Hugepages will be freed exactly as allocated. 
00:03:41.080 EAL: No shared files mode enabled, IPC is disabled
00:03:41.080 EAL: No shared files mode enabled, IPC is disabled
00:03:41.080 EAL: TSC frequency is ~2100000 KHz
00:03:41.080 EAL: Main lcore 0 is ready (tid=7f8c0deaea00;cpuset=[0])
00:03:41.080 EAL: Trying to obtain current memory policy.
00:03:41.080 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:41.080 EAL: Restoring previous memory policy: 0
00:03:41.080 EAL: request: mp_malloc_sync
00:03:41.080 EAL: No shared files mode enabled, IPC is disabled
00:03:41.080 EAL: Heap on socket 0 was expanded by 2MB
00:03:41.080 EAL: No shared files mode enabled, IPC is disabled
00:03:41.080 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:41.080 EAL: Mem event callback 'spdk:(nil)' registered
00:03:41.080
00:03:41.080
00:03:41.080 CUnit - A unit testing framework for C - Version 2.1-3
00:03:41.080 http://cunit.sourceforge.net/
00:03:41.080
00:03:41.080
00:03:41.080 Suite: components_suite
00:03:41.080 Test: vtophys_malloc_test ...passed
00:03:41.080 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:41.080 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:41.080 EAL: Restoring previous memory policy: 4
00:03:41.080 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.080 EAL: request: mp_malloc_sync
00:03:41.080 EAL: No shared files mode enabled, IPC is disabled
00:03:41.080 EAL: Heap on socket 0 was expanded by 4MB
00:03:41.081 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.081 EAL: request: mp_malloc_sync
00:03:41.081 EAL: No shared files mode enabled, IPC is disabled
00:03:41.081 EAL: Heap on socket 0 was shrunk by 4MB
00:03:41.081 EAL: Trying to obtain current memory policy.
00:03:41.081 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:41.081 EAL: Restoring previous memory policy: 4
00:03:41.081 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.081 EAL: request: mp_malloc_sync
00:03:41.081 EAL: No shared files mode enabled, IPC is disabled
00:03:41.081 EAL: Heap on socket 0 was expanded by 6MB
00:03:41.081 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.081 EAL: request: mp_malloc_sync
00:03:41.081 EAL: No shared files mode enabled, IPC is disabled
00:03:41.081 EAL: Heap on socket 0 was shrunk by 6MB
00:03:41.081 EAL: Trying to obtain current memory policy.
00:03:41.081 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:41.081 EAL: Restoring previous memory policy: 4
00:03:41.081 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.081 EAL: request: mp_malloc_sync
00:03:41.081 EAL: No shared files mode enabled, IPC is disabled
00:03:41.081 EAL: Heap on socket 0 was expanded by 10MB
00:03:41.081 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.081 EAL: request: mp_malloc_sync
00:03:41.081 EAL: No shared files mode enabled, IPC is disabled
00:03:41.081 EAL: Heap on socket 0 was shrunk by 10MB
00:03:41.081 EAL: Trying to obtain current memory policy.
00:03:41.081 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:41.081 EAL: Restoring previous memory policy: 4
00:03:41.081 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.081 EAL: request: mp_malloc_sync
00:03:41.081 EAL: No shared files mode enabled, IPC is disabled
00:03:41.081 EAL: Heap on socket 0 was expanded by 18MB
00:03:41.340 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.340 EAL: request: mp_malloc_sync
00:03:41.340 EAL: No shared files mode enabled, IPC is disabled
00:03:41.340 EAL: Heap on socket 0 was shrunk by 18MB
00:03:41.340 EAL: Trying to obtain current memory policy.
00:03:41.340 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:41.340 EAL: Restoring previous memory policy: 4
00:03:41.340 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.340 EAL: request: mp_malloc_sync
00:03:41.340 EAL: No shared files mode enabled, IPC is disabled
00:03:41.340 EAL: Heap on socket 0 was expanded by 34MB
00:03:41.340 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.340 EAL: request: mp_malloc_sync
00:03:41.340 EAL: No shared files mode enabled, IPC is disabled
00:03:41.340 EAL: Heap on socket 0 was shrunk by 34MB
00:03:41.340 EAL: Trying to obtain current memory policy.
00:03:41.340 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:41.340 EAL: Restoring previous memory policy: 4
00:03:41.340 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.340 EAL: request: mp_malloc_sync
00:03:41.340 EAL: No shared files mode enabled, IPC is disabled
00:03:41.340 EAL: Heap on socket 0 was expanded by 66MB
00:03:41.340 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.340 EAL: request: mp_malloc_sync
00:03:41.340 EAL: No shared files mode enabled, IPC is disabled
00:03:41.340 EAL: Heap on socket 0 was shrunk by 66MB
00:03:41.340 EAL: Trying to obtain current memory policy.
00:03:41.340 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:41.340 EAL: Restoring previous memory policy: 4
00:03:41.340 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.340 EAL: request: mp_malloc_sync
00:03:41.340 EAL: No shared files mode enabled, IPC is disabled
00:03:41.340 EAL: Heap on socket 0 was expanded by 130MB
00:03:41.340 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.340 EAL: request: mp_malloc_sync
00:03:41.340 EAL: No shared files mode enabled, IPC is disabled
00:03:41.340 EAL: Heap on socket 0 was shrunk by 130MB
00:03:41.340 EAL: Trying to obtain current memory policy.
00:03:41.340 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:41.340 EAL: Restoring previous memory policy: 4
00:03:41.340 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.340 EAL: request: mp_malloc_sync
00:03:41.340 EAL: No shared files mode enabled, IPC is disabled
00:03:41.340 EAL: Heap on socket 0 was expanded by 258MB
00:03:41.340 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.340 EAL: request: mp_malloc_sync
00:03:41.340 EAL: No shared files mode enabled, IPC is disabled
00:03:41.340 EAL: Heap on socket 0 was shrunk by 258MB
00:03:41.340 EAL: Trying to obtain current memory policy.
00:03:41.340 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:41.599 EAL: Restoring previous memory policy: 4
00:03:41.599 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.599 EAL: request: mp_malloc_sync
00:03:41.599 EAL: No shared files mode enabled, IPC is disabled
00:03:41.599 EAL: Heap on socket 0 was expanded by 514MB
00:03:41.599 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.599 EAL: request: mp_malloc_sync
00:03:41.599 EAL: No shared files mode enabled, IPC is disabled
00:03:41.599 EAL: Heap on socket 0 was shrunk by 514MB
00:03:41.599 EAL: Trying to obtain current memory policy.
00:03:41.599 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:41.857 EAL: Restoring previous memory policy: 4
00:03:41.857 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.857 EAL: request: mp_malloc_sync
00:03:41.857 EAL: No shared files mode enabled, IPC is disabled
00:03:41.857 EAL: Heap on socket 0 was expanded by 1026MB
00:03:42.117 EAL: Calling mem event callback 'spdk:(nil)'
00:03:42.117 EAL: request: mp_malloc_sync
00:03:42.117 EAL: No shared files mode enabled, IPC is disabled
00:03:42.117 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:42.117 passed
00:03:42.117
00:03:42.117 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:42.117               suites      1      1    n/a      0        0
00:03:42.117                tests      2      2      2      0        0
00:03:42.117              asserts    497    497    497      0      n/a
00:03:42.117
00:03:42.117 Elapsed time =    0.971 seconds
00:03:42.117 EAL: Calling mem event callback 'spdk:(nil)'
00:03:42.117 EAL: request: mp_malloc_sync
00:03:42.117 EAL: No shared files mode enabled, IPC is disabled
00:03:42.117 EAL: Heap on socket 0 was shrunk by 2MB
00:03:42.117 EAL: No shared files mode enabled, IPC is disabled
00:03:42.117 EAL: No shared files mode enabled, IPC is disabled
00:03:42.117 EAL: No shared files mode enabled, IPC is disabled
00:03:42.117
00:03:42.117 real	0m1.096s
00:03:42.117 user	0m0.650s
00:03:42.117 sys	0m0.422s
00:03:42.117 12:44:02 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:42.117 12:44:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:42.117 ************************************
00:03:42.117 END TEST env_vtophys
00:03:42.117 ************************************
00:03:42.117 12:44:02 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:42.117 12:44:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:42.117 12:44:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:42.117 12:44:02 env -- common/autotest_common.sh@10 -- # set +x
00:03:42.376 ************************************
00:03:42.376 START TEST env_pci
00:03:42.376 ************************************
00:03:42.376 12:44:02 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:42.376
00:03:42.376
00:03:42.376 CUnit - A unit testing framework for C - Version 2.1-3
00:03:42.376 http://cunit.sourceforge.net/
00:03:42.376
00:03:42.376
00:03:42.376 Suite: pci
00:03:42.376 Test: pci_hook ...[2024-10-15 12:44:02.481963] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1020881 has claimed it
00:03:42.376 EAL: Cannot find device (10000:00:01.0)
00:03:42.376 EAL: Failed to attach device on primary process
00:03:42.376 passed
00:03:42.376
00:03:42.376 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:42.376               suites      1      1    n/a      0        0
00:03:42.376                tests      1      1      1      0        0
00:03:42.376              asserts     25     25     25      0      n/a
00:03:42.376
00:03:42.376 Elapsed time =    0.026 seconds
00:03:42.376
00:03:42.376 real	0m0.045s
00:03:42.376 user	0m0.015s
00:03:42.376 sys	0m0.030s
00:03:42.376 12:44:02 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:42.376 12:44:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:42.376 ************************************
00:03:42.376 END TEST env_pci
00:03:42.376 ************************************
00:03:42.376 12:44:02 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:42.376 12:44:02 env -- env/env.sh@15 -- # uname
00:03:42.376 12:44:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:42.376 12:44:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:42.376 12:44:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:42.376 12:44:02 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:03:42.376 12:44:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:42.376 12:44:02 env -- common/autotest_common.sh@10 -- # set +x
00:03:42.376 ************************************
00:03:42.376 START TEST env_dpdk_post_init
00:03:42.376 ************************************
00:03:42.376 12:44:02 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
TELEMETRY: No legacy callbacks, legacy socket not created
00:03:42.635 EAL: Using IOMMU type 1 (Type 1)
00:03:42.635 EAL: Ignore mapping IO port bar(1)
00:03:42.635 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:03:42.635 EAL: Ignore mapping IO port bar(1)
00:03:42.635 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:03:42.635 EAL: Ignore mapping IO port bar(1)
00:03:42.635 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:03:42.635 EAL: Ignore mapping IO port bar(1)
00:03:42.635 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:03:42.635 EAL: Ignore mapping IO port bar(1)
00:03:42.635 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:03:42.635 EAL: Ignore mapping IO port bar(1)
00:03:42.635 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:03:42.635 EAL: Ignore mapping IO port bar(1)
00:03:42.635 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:03:42.635 EAL: Ignore mapping IO port bar(1)
00:03:42.635 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:03:43.572 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:03:43.572 EAL: Ignore mapping IO port bar(1)
00:03:43.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:03:43.572 EAL: Ignore mapping IO port bar(1)
00:03:43.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:03:43.572 EAL: Ignore mapping IO port bar(1)
00:03:43.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:03:43.572 EAL: Ignore mapping IO port bar(1)
00:03:43.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:03:43.572 EAL: Ignore mapping IO port bar(1)
00:03:43.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:03:43.572 EAL: Ignore mapping IO port bar(1)
00:03:43.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:03:43.572 EAL: Ignore mapping IO port bar(1)
00:03:43.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:03:43.572 EAL: Ignore mapping IO port bar(1)
00:03:43.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:03:46.857 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:03:46.857 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:03:47.426 Starting DPDK initialization...
00:03:47.426 Starting SPDK post initialization...
00:03:47.426 SPDK NVMe probe
00:03:47.426 Attaching to 0000:5e:00.0
00:03:47.426 Attached to 0000:5e:00.0
00:03:47.426 Cleaning up...
00:03:47.426
00:03:47.426 real	0m4.948s
00:03:47.426 user	0m3.499s
00:03:47.426 sys	0m0.514s
00:03:47.426 12:44:07 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:47.426 12:44:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:47.426 ************************************
00:03:47.426 END TEST env_dpdk_post_init
00:03:47.426 ************************************
00:03:47.426 12:44:07 env -- env/env.sh@26 -- # uname
00:03:47.426 12:44:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:47.426 12:44:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:47.426 12:44:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:47.426 12:44:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:47.426 12:44:07 env -- common/autotest_common.sh@10 -- # set +x
00:03:47.426 ************************************
00:03:47.426 START TEST env_mem_callbacks
00:03:47.426 ************************************
00:03:47.426 12:44:07 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
TELEMETRY: No legacy callbacks, legacy socket not created
00:03:47.426
00:03:47.426
00:03:47.426 CUnit - A unit testing framework for C - Version 2.1-3
00:03:47.426 http://cunit.sourceforge.net/
00:03:47.426
00:03:47.426
00:03:47.426 Suite: memory
00:03:47.426 Test: test ...
00:03:47.426 register 0x200000200000 2097152
00:03:47.426 malloc 3145728
00:03:47.426 register 0x200000400000 4194304
00:03:47.426 buf 0x200000500000 len 3145728 PASSED
00:03:47.426 malloc 64
00:03:47.426 buf 0x2000004fff40 len 64 PASSED
00:03:47.426 malloc 4194304
00:03:47.426 register 0x200000800000 6291456
00:03:47.426 buf 0x200000a00000 len 4194304 PASSED
00:03:47.426 free 0x200000500000 3145728
00:03:47.426 free 0x2000004fff40 64
00:03:47.426 unregister 0x200000400000 4194304 PASSED
00:03:47.426 free 0x200000a00000 4194304
00:03:47.426 unregister 0x200000800000 6291456 PASSED
00:03:47.426 malloc 8388608
00:03:47.426 register 0x200000400000 10485760
00:03:47.426 buf 0x200000600000 len 8388608 PASSED
00:03:47.426 free 0x200000600000 8388608
00:03:47.426 unregister 0x200000400000 10485760 PASSED
00:03:47.426 passed
00:03:47.426
00:03:47.426 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:47.426               suites      1      1    n/a      0        0
00:03:47.426                tests      1      1      1      0        0
00:03:47.426              asserts     15     15     15      0      n/a
00:03:47.426
00:03:47.426 Elapsed time =    0.008 seconds
00:03:47.426
00:03:47.426 real	0m0.060s
00:03:47.426 user	0m0.024s
00:03:47.426 sys	0m0.036s
00:03:47.426 12:44:07 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:47.426 12:44:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:47.426 ************************************
00:03:47.426 END TEST env_mem_callbacks
00:03:47.426 ************************************
00:03:47.426
00:03:47.426 real	0m6.828s
00:03:47.426 user	0m4.562s
00:03:47.426 sys	0m1.343s
00:03:47.426 12:44:07 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:47.426 12:44:07 env -- common/autotest_common.sh@10 -- # set +x
00:03:47.426 ************************************
00:03:47.426 END TEST env
00:03:47.426 ************************************
00:03:47.426 12:44:07 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:47.426 12:44:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:47.426 12:44:07 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:47.426 12:44:07 -- common/autotest_common.sh@10 -- # set +x
00:03:47.685 ************************************
00:03:47.686 START TEST rpc
00:03:47.686 ************************************
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:47.686 * Looking for test storage...
00:03:47.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:03:47.686 12:44:07 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:47.686 12:44:07 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:47.686 12:44:07 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:47.686 12:44:07 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:47.686 12:44:07 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:47.686 12:44:07 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:47.686 12:44:07 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:47.686 12:44:07 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:47.686 12:44:07 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:47.686 12:44:07 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:47.686 12:44:07 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:47.686 12:44:07 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:47.686 12:44:07 rpc -- scripts/common.sh@345 -- # : 1
00:03:47.686 12:44:07 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:47.686 12:44:07 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:47.686 12:44:07 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:47.686 12:44:07 rpc -- scripts/common.sh@353 -- # local d=1
00:03:47.686 12:44:07 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:47.686 12:44:07 rpc -- scripts/common.sh@355 -- # echo 1
00:03:47.686 12:44:07 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:47.686 12:44:07 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:47.686 12:44:07 rpc -- scripts/common.sh@353 -- # local d=2
00:03:47.686 12:44:07 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:47.686 12:44:07 rpc -- scripts/common.sh@355 -- # echo 2
00:03:47.686 12:44:07 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:47.686 12:44:07 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:47.686 12:44:07 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:47.686 12:44:07 rpc -- scripts/common.sh@368 -- # return 0
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:03:47.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:47.686 --rc genhtml_branch_coverage=1
00:03:47.686 --rc genhtml_function_coverage=1
00:03:47.686 --rc genhtml_legend=1
00:03:47.686 --rc geninfo_all_blocks=1
00:03:47.686 --rc geninfo_unexecuted_blocks=1
00:03:47.686
00:03:47.686 '
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:03:47.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:47.686 --rc genhtml_branch_coverage=1
00:03:47.686 --rc genhtml_function_coverage=1
00:03:47.686 --rc genhtml_legend=1
00:03:47.686 --rc geninfo_all_blocks=1
00:03:47.686 --rc geninfo_unexecuted_blocks=1
00:03:47.686
00:03:47.686 '
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:03:47.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:47.686 --rc genhtml_branch_coverage=1
00:03:47.686 --rc genhtml_function_coverage=1
00:03:47.686 --rc genhtml_legend=1
00:03:47.686 --rc geninfo_all_blocks=1
00:03:47.686 --rc geninfo_unexecuted_blocks=1
00:03:47.686
00:03:47.686 '
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:03:47.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:47.686 --rc genhtml_branch_coverage=1
00:03:47.686 --rc genhtml_function_coverage=1
00:03:47.686 --rc genhtml_legend=1
00:03:47.686 --rc geninfo_all_blocks=1
00:03:47.686 --rc geninfo_unexecuted_blocks=1
00:03:47.686
00:03:47.686 '
00:03:47.686 12:44:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1021930
00:03:47.686 12:44:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:47.686 12:44:07 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:47.686 12:44:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1021930
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@831 -- # '[' -z 1021930 ']'
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:03:47.686 12:44:07 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:47.686 [2024-10-15 12:44:08.006693] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization...
00:03:47.686 [2024-10-15 12:44:08.006741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021930 ]
00:03:48.016 [2024-10-15 12:44:08.075500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:48.016 [2024-10-15 12:44:08.116726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:48.016 [2024-10-15 12:44:08.116761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1021930' to capture a snapshot of events at runtime.
00:03:48.016 [2024-10-15 12:44:08.116768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:48.016 [2024-10-15 12:44:08.116774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:48.016 [2024-10-15 12:44:08.116780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1021930 for offline analysis/debug.
00:03:48.016 [2024-10-15 12:44:08.117342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:48.341 12:44:08 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:03:48.341 12:44:08 rpc -- common/autotest_common.sh@864 -- # return 0
00:03:48.341 12:44:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:48.341 12:44:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:48.341 12:44:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:48.341 12:44:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:48.341 12:44:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:48.341 12:44:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:48.341 12:44:08 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:48.341 ************************************
00:03:48.341 START TEST rpc_integrity
00:03:48.341 ************************************
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:48.341 {
00:03:48.341 "name": "Malloc0",
00:03:48.341 "aliases": [
00:03:48.341 "8545aad0-6538-480d-abbf-4b68c7a6bea0"
00:03:48.341 ],
00:03:48.341 "product_name": "Malloc disk",
00:03:48.341 "block_size": 512,
00:03:48.341 "num_blocks": 16384,
00:03:48.341 "uuid": "8545aad0-6538-480d-abbf-4b68c7a6bea0",
00:03:48.341 "assigned_rate_limits": {
00:03:48.341 "rw_ios_per_sec": 0,
00:03:48.341 "rw_mbytes_per_sec": 0,
00:03:48.341 "r_mbytes_per_sec": 0,
00:03:48.341 "w_mbytes_per_sec": 0
00:03:48.341 },
00:03:48.341 "claimed": false,
00:03:48.341 "zoned": false,
00:03:48.341 "supported_io_types": {
00:03:48.341 "read": true,
00:03:48.341 "write": true,
00:03:48.341 "unmap": true,
00:03:48.341 "flush": true,
00:03:48.341 "reset": true,
00:03:48.341 "nvme_admin": false,
00:03:48.341 "nvme_io": false,
00:03:48.341 "nvme_io_md": false,
00:03:48.341 "write_zeroes": true,
00:03:48.341 "zcopy": true,
00:03:48.341 "get_zone_info": false,
00:03:48.341 "zone_management": false,
00:03:48.341 "zone_append": false,
00:03:48.341 "compare": false,
00:03:48.341 "compare_and_write": false,
00:03:48.341 "abort": true,
00:03:48.341 "seek_hole": false,
00:03:48.341 "seek_data": false,
00:03:48.341 "copy": true,
00:03:48.341 "nvme_iov_md": false
00:03:48.341 },
00:03:48.341 "memory_domains": [
00:03:48.341 {
00:03:48.341 "dma_device_id": "system",
00:03:48.341 "dma_device_type": 1
00:03:48.341 },
00:03:48.341 {
00:03:48.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:48.341 "dma_device_type": 2
00:03:48.341 }
00:03:48.341 ],
00:03:48.341 "driver_specific": {}
00:03:48.341 }
00:03:48.341 ]'
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:48.341 [2024-10-15 12:44:08.496454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:48.341 [2024-10-15 12:44:08.496481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:48.341 [2024-10-15 12:44:08.496492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d84790
00:03:48.341 [2024-10-15 12:44:08.496498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:48.341 [2024-10-15 12:44:08.497567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:48.341 [2024-10-15 12:44:08.497589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
Passthru0
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:48.341 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:48.341 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:48.341 {
00:03:48.341 "name": "Malloc0",
00:03:48.341 "aliases": [
00:03:48.341 "8545aad0-6538-480d-abbf-4b68c7a6bea0"
00:03:48.341 ],
00:03:48.341 "product_name": "Malloc disk",
00:03:48.341 "block_size": 512,
00:03:48.341 "num_blocks": 16384,
00:03:48.341 "uuid": "8545aad0-6538-480d-abbf-4b68c7a6bea0",
00:03:48.341 "assigned_rate_limits": {
00:03:48.341 "rw_ios_per_sec": 0,
00:03:48.341 "rw_mbytes_per_sec": 0,
00:03:48.341 "r_mbytes_per_sec": 0,
00:03:48.341 "w_mbytes_per_sec": 0
00:03:48.341 },
00:03:48.341 "claimed": true,
00:03:48.341 "claim_type": "exclusive_write",
00:03:48.341 "zoned": false,
00:03:48.341 "supported_io_types": {
00:03:48.341 "read": true,
00:03:48.341 "write": true,
00:03:48.341 "unmap": true,
00:03:48.341 "flush": true,
00:03:48.341 "reset": true,
00:03:48.341 "nvme_admin": false,
00:03:48.341 "nvme_io": false,
00:03:48.341 "nvme_io_md": false,
00:03:48.341 "write_zeroes": true,
00:03:48.341 "zcopy": true,
00:03:48.341 "get_zone_info": false,
00:03:48.341 "zone_management": false,
00:03:48.341 "zone_append": false,
00:03:48.341 "compare": false,
00:03:48.341 "compare_and_write": false,
00:03:48.342 "abort": true,
00:03:48.342 "seek_hole": false,
00:03:48.342 "seek_data": false,
00:03:48.342 "copy": true,
00:03:48.342 "nvme_iov_md": false
00:03:48.342 },
00:03:48.342 "memory_domains": [
00:03:48.342 {
00:03:48.342 "dma_device_id": "system",
00:03:48.342 "dma_device_type": 1
00:03:48.342 },
00:03:48.342 {
00:03:48.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:48.342 "dma_device_type": 2
00:03:48.342 }
00:03:48.342 ],
00:03:48.342 "driver_specific": {}
00:03:48.342 },
00:03:48.342 {
00:03:48.342 "name": "Passthru0", 00:03:48.342 "aliases": [ 00:03:48.342 "c6cafad1-8bfa-59e5-a23c-5800dda3ae74" 00:03:48.342 ], 00:03:48.342 "product_name": "passthru", 00:03:48.342 "block_size": 512, 00:03:48.342 "num_blocks": 16384, 00:03:48.342 "uuid": "c6cafad1-8bfa-59e5-a23c-5800dda3ae74", 00:03:48.342 "assigned_rate_limits": { 00:03:48.342 "rw_ios_per_sec": 0, 00:03:48.342 "rw_mbytes_per_sec": 0, 00:03:48.342 "r_mbytes_per_sec": 0, 00:03:48.342 "w_mbytes_per_sec": 0 00:03:48.342 }, 00:03:48.342 "claimed": false, 00:03:48.342 "zoned": false, 00:03:48.342 "supported_io_types": { 00:03:48.342 "read": true, 00:03:48.342 "write": true, 00:03:48.342 "unmap": true, 00:03:48.342 "flush": true, 00:03:48.342 "reset": true, 00:03:48.342 "nvme_admin": false, 00:03:48.342 "nvme_io": false, 00:03:48.342 "nvme_io_md": false, 00:03:48.342 "write_zeroes": true, 00:03:48.342 "zcopy": true, 00:03:48.342 "get_zone_info": false, 00:03:48.342 "zone_management": false, 00:03:48.342 "zone_append": false, 00:03:48.342 "compare": false, 00:03:48.342 "compare_and_write": false, 00:03:48.342 "abort": true, 00:03:48.342 "seek_hole": false, 00:03:48.342 "seek_data": false, 00:03:48.342 "copy": true, 00:03:48.342 "nvme_iov_md": false 00:03:48.342 }, 00:03:48.342 "memory_domains": [ 00:03:48.342 { 00:03:48.342 "dma_device_id": "system", 00:03:48.342 "dma_device_type": 1 00:03:48.342 }, 00:03:48.342 { 00:03:48.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.342 "dma_device_type": 2 00:03:48.342 } 00:03:48.342 ], 00:03:48.342 "driver_specific": { 00:03:48.342 "passthru": { 00:03:48.342 "name": "Passthru0", 00:03:48.342 "base_bdev_name": "Malloc0" 00:03:48.342 } 00:03:48.342 } 00:03:48.342 } 00:03:48.342 ]' 00:03:48.342 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:48.342 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:48.342 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:48.342 12:44:08 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.342 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.342 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.342 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:48.342 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.342 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.342 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.342 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:48.342 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.342 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.342 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.342 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:48.342 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:48.342 12:44:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:48.342 00:03:48.342 real 0m0.268s 00:03:48.342 user 0m0.175s 00:03:48.342 sys 0m0.027s 00:03:48.342 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.342 12:44:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.342 ************************************ 00:03:48.342 END TEST rpc_integrity 00:03:48.342 ************************************ 00:03:48.601 12:44:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:48.601 12:44:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.601 12:44:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.601 12:44:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.601 ************************************ 00:03:48.601 START TEST rpc_plugins 
00:03:48.601 ************************************ 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:48.601 12:44:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.601 12:44:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:48.601 12:44:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.601 12:44:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:48.601 { 00:03:48.601 "name": "Malloc1", 00:03:48.601 "aliases": [ 00:03:48.601 "df42a919-07bb-41cd-985a-e88cf1b9859d" 00:03:48.601 ], 00:03:48.601 "product_name": "Malloc disk", 00:03:48.601 "block_size": 4096, 00:03:48.601 "num_blocks": 256, 00:03:48.601 "uuid": "df42a919-07bb-41cd-985a-e88cf1b9859d", 00:03:48.601 "assigned_rate_limits": { 00:03:48.601 "rw_ios_per_sec": 0, 00:03:48.601 "rw_mbytes_per_sec": 0, 00:03:48.601 "r_mbytes_per_sec": 0, 00:03:48.601 "w_mbytes_per_sec": 0 00:03:48.601 }, 00:03:48.601 "claimed": false, 00:03:48.601 "zoned": false, 00:03:48.601 "supported_io_types": { 00:03:48.601 "read": true, 00:03:48.601 "write": true, 00:03:48.601 "unmap": true, 00:03:48.601 "flush": true, 00:03:48.601 "reset": true, 00:03:48.601 "nvme_admin": false, 00:03:48.601 "nvme_io": false, 00:03:48.601 "nvme_io_md": false, 00:03:48.601 "write_zeroes": true, 00:03:48.601 "zcopy": true, 00:03:48.601 "get_zone_info": false, 00:03:48.601 "zone_management": false, 00:03:48.601 
"zone_append": false, 00:03:48.601 "compare": false, 00:03:48.601 "compare_and_write": false, 00:03:48.601 "abort": true, 00:03:48.601 "seek_hole": false, 00:03:48.601 "seek_data": false, 00:03:48.601 "copy": true, 00:03:48.601 "nvme_iov_md": false 00:03:48.601 }, 00:03:48.601 "memory_domains": [ 00:03:48.601 { 00:03:48.601 "dma_device_id": "system", 00:03:48.601 "dma_device_type": 1 00:03:48.601 }, 00:03:48.601 { 00:03:48.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.601 "dma_device_type": 2 00:03:48.601 } 00:03:48.601 ], 00:03:48.601 "driver_specific": {} 00:03:48.601 } 00:03:48.601 ]' 00:03:48.601 12:44:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:48.601 12:44:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:48.601 12:44:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.601 12:44:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.601 12:44:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:48.601 12:44:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:48.601 12:44:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:48.601 00:03:48.601 real 0m0.144s 00:03:48.601 user 0m0.089s 00:03:48.601 sys 0m0.019s 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.601 12:44:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.601 ************************************ 
00:03:48.601 END TEST rpc_plugins 00:03:48.601 ************************************ 00:03:48.601 12:44:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:48.601 12:44:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.601 12:44:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.601 12:44:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.601 ************************************ 00:03:48.601 START TEST rpc_trace_cmd_test 00:03:48.601 ************************************ 00:03:48.601 12:44:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:48.601 12:44:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:48.601 12:44:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:48.601 12:44:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.601 12:44:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.859 12:44:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.859 12:44:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:48.859 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1021930", 00:03:48.859 "tpoint_group_mask": "0x8", 00:03:48.859 "iscsi_conn": { 00:03:48.859 "mask": "0x2", 00:03:48.859 "tpoint_mask": "0x0" 00:03:48.859 }, 00:03:48.859 "scsi": { 00:03:48.859 "mask": "0x4", 00:03:48.859 "tpoint_mask": "0x0" 00:03:48.859 }, 00:03:48.859 "bdev": { 00:03:48.859 "mask": "0x8", 00:03:48.859 "tpoint_mask": "0xffffffffffffffff" 00:03:48.859 }, 00:03:48.859 "nvmf_rdma": { 00:03:48.859 "mask": "0x10", 00:03:48.859 "tpoint_mask": "0x0" 00:03:48.859 }, 00:03:48.859 "nvmf_tcp": { 00:03:48.859 "mask": "0x20", 00:03:48.859 "tpoint_mask": "0x0" 00:03:48.859 }, 00:03:48.859 "ftl": { 00:03:48.859 "mask": "0x40", 00:03:48.859 "tpoint_mask": "0x0" 00:03:48.859 }, 00:03:48.859 "blobfs": { 00:03:48.859 "mask": "0x80", 00:03:48.859 
"tpoint_mask": "0x0" 00:03:48.860 }, 00:03:48.860 "dsa": { 00:03:48.860 "mask": "0x200", 00:03:48.860 "tpoint_mask": "0x0" 00:03:48.860 }, 00:03:48.860 "thread": { 00:03:48.860 "mask": "0x400", 00:03:48.860 "tpoint_mask": "0x0" 00:03:48.860 }, 00:03:48.860 "nvme_pcie": { 00:03:48.860 "mask": "0x800", 00:03:48.860 "tpoint_mask": "0x0" 00:03:48.860 }, 00:03:48.860 "iaa": { 00:03:48.860 "mask": "0x1000", 00:03:48.860 "tpoint_mask": "0x0" 00:03:48.860 }, 00:03:48.860 "nvme_tcp": { 00:03:48.860 "mask": "0x2000", 00:03:48.860 "tpoint_mask": "0x0" 00:03:48.860 }, 00:03:48.860 "bdev_nvme": { 00:03:48.860 "mask": "0x4000", 00:03:48.860 "tpoint_mask": "0x0" 00:03:48.860 }, 00:03:48.860 "sock": { 00:03:48.860 "mask": "0x8000", 00:03:48.860 "tpoint_mask": "0x0" 00:03:48.860 }, 00:03:48.860 "blob": { 00:03:48.860 "mask": "0x10000", 00:03:48.860 "tpoint_mask": "0x0" 00:03:48.860 }, 00:03:48.860 "bdev_raid": { 00:03:48.860 "mask": "0x20000", 00:03:48.860 "tpoint_mask": "0x0" 00:03:48.860 }, 00:03:48.860 "scheduler": { 00:03:48.860 "mask": "0x40000", 00:03:48.860 "tpoint_mask": "0x0" 00:03:48.860 } 00:03:48.860 }' 00:03:48.860 12:44:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:48.860 12:44:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:48.860 12:44:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:48.860 12:44:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:48.860 12:44:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:48.860 12:44:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:48.860 12:44:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:48.860 12:44:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:48.860 12:44:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:48.860 12:44:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:48.860 00:03:48.860 real 0m0.214s 00:03:48.860 user 0m0.181s 00:03:48.860 sys 0m0.022s 00:03:48.860 12:44:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.860 12:44:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.860 ************************************ 00:03:48.860 END TEST rpc_trace_cmd_test 00:03:48.860 ************************************ 00:03:48.860 12:44:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:48.860 12:44:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:48.860 12:44:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:48.860 12:44:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.860 12:44:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.860 12:44:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.118 ************************************ 00:03:49.118 START TEST rpc_daemon_integrity 00:03:49.118 ************************************ 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.118 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:49.118 { 00:03:49.118 "name": "Malloc2", 00:03:49.118 "aliases": [ 00:03:49.118 "df068d4c-a6ae-4390-89a7-f5f8b8fb33da" 00:03:49.118 ], 00:03:49.118 "product_name": "Malloc disk", 00:03:49.118 "block_size": 512, 00:03:49.118 "num_blocks": 16384, 00:03:49.118 "uuid": "df068d4c-a6ae-4390-89a7-f5f8b8fb33da", 00:03:49.118 "assigned_rate_limits": { 00:03:49.118 "rw_ios_per_sec": 0, 00:03:49.118 "rw_mbytes_per_sec": 0, 00:03:49.118 "r_mbytes_per_sec": 0, 00:03:49.118 "w_mbytes_per_sec": 0 00:03:49.118 }, 00:03:49.118 "claimed": false, 00:03:49.119 "zoned": false, 00:03:49.119 "supported_io_types": { 00:03:49.119 "read": true, 00:03:49.119 "write": true, 00:03:49.119 "unmap": true, 00:03:49.119 "flush": true, 00:03:49.119 "reset": true, 00:03:49.119 "nvme_admin": false, 00:03:49.119 "nvme_io": false, 00:03:49.119 "nvme_io_md": false, 00:03:49.119 "write_zeroes": true, 00:03:49.119 "zcopy": true, 00:03:49.119 "get_zone_info": false, 00:03:49.119 "zone_management": false, 00:03:49.119 "zone_append": false, 00:03:49.119 "compare": false, 00:03:49.119 "compare_and_write": false, 00:03:49.119 "abort": true, 00:03:49.119 "seek_hole": false, 00:03:49.119 "seek_data": false, 00:03:49.119 "copy": true, 00:03:49.119 "nvme_iov_md": false 00:03:49.119 }, 00:03:49.119 "memory_domains": [ 00:03:49.119 { 
00:03:49.119 "dma_device_id": "system", 00:03:49.119 "dma_device_type": 1 00:03:49.119 }, 00:03:49.119 { 00:03:49.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.119 "dma_device_type": 2 00:03:49.119 } 00:03:49.119 ], 00:03:49.119 "driver_specific": {} 00:03:49.119 } 00:03:49.119 ]' 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.119 [2024-10-15 12:44:09.334745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:49.119 [2024-10-15 12:44:09.334771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:49.119 [2024-10-15 12:44:09.334785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d85330 00:03:49.119 [2024-10-15 12:44:09.334791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:49.119 [2024-10-15 12:44:09.335867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:49.119 [2024-10-15 12:44:09.335887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:49.119 Passthru0 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:49.119 { 00:03:49.119 "name": "Malloc2", 00:03:49.119 "aliases": [ 00:03:49.119 "df068d4c-a6ae-4390-89a7-f5f8b8fb33da" 00:03:49.119 ], 00:03:49.119 "product_name": "Malloc disk", 00:03:49.119 "block_size": 512, 00:03:49.119 "num_blocks": 16384, 00:03:49.119 "uuid": "df068d4c-a6ae-4390-89a7-f5f8b8fb33da", 00:03:49.119 "assigned_rate_limits": { 00:03:49.119 "rw_ios_per_sec": 0, 00:03:49.119 "rw_mbytes_per_sec": 0, 00:03:49.119 "r_mbytes_per_sec": 0, 00:03:49.119 "w_mbytes_per_sec": 0 00:03:49.119 }, 00:03:49.119 "claimed": true, 00:03:49.119 "claim_type": "exclusive_write", 00:03:49.119 "zoned": false, 00:03:49.119 "supported_io_types": { 00:03:49.119 "read": true, 00:03:49.119 "write": true, 00:03:49.119 "unmap": true, 00:03:49.119 "flush": true, 00:03:49.119 "reset": true, 00:03:49.119 "nvme_admin": false, 00:03:49.119 "nvme_io": false, 00:03:49.119 "nvme_io_md": false, 00:03:49.119 "write_zeroes": true, 00:03:49.119 "zcopy": true, 00:03:49.119 "get_zone_info": false, 00:03:49.119 "zone_management": false, 00:03:49.119 "zone_append": false, 00:03:49.119 "compare": false, 00:03:49.119 "compare_and_write": false, 00:03:49.119 "abort": true, 00:03:49.119 "seek_hole": false, 00:03:49.119 "seek_data": false, 00:03:49.119 "copy": true, 00:03:49.119 "nvme_iov_md": false 00:03:49.119 }, 00:03:49.119 "memory_domains": [ 00:03:49.119 { 00:03:49.119 "dma_device_id": "system", 00:03:49.119 "dma_device_type": 1 00:03:49.119 }, 00:03:49.119 { 00:03:49.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.119 "dma_device_type": 2 00:03:49.119 } 00:03:49.119 ], 00:03:49.119 "driver_specific": {} 00:03:49.119 }, 00:03:49.119 { 00:03:49.119 "name": "Passthru0", 00:03:49.119 "aliases": [ 00:03:49.119 "2c312393-ae7a-533f-b950-fbef57d9cc5e" 00:03:49.119 ], 00:03:49.119 "product_name": "passthru", 00:03:49.119 "block_size": 512, 00:03:49.119 "num_blocks": 16384, 00:03:49.119 "uuid": 
"2c312393-ae7a-533f-b950-fbef57d9cc5e", 00:03:49.119 "assigned_rate_limits": { 00:03:49.119 "rw_ios_per_sec": 0, 00:03:49.119 "rw_mbytes_per_sec": 0, 00:03:49.119 "r_mbytes_per_sec": 0, 00:03:49.119 "w_mbytes_per_sec": 0 00:03:49.119 }, 00:03:49.119 "claimed": false, 00:03:49.119 "zoned": false, 00:03:49.119 "supported_io_types": { 00:03:49.119 "read": true, 00:03:49.119 "write": true, 00:03:49.119 "unmap": true, 00:03:49.119 "flush": true, 00:03:49.119 "reset": true, 00:03:49.119 "nvme_admin": false, 00:03:49.119 "nvme_io": false, 00:03:49.119 "nvme_io_md": false, 00:03:49.119 "write_zeroes": true, 00:03:49.119 "zcopy": true, 00:03:49.119 "get_zone_info": false, 00:03:49.119 "zone_management": false, 00:03:49.119 "zone_append": false, 00:03:49.119 "compare": false, 00:03:49.119 "compare_and_write": false, 00:03:49.119 "abort": true, 00:03:49.119 "seek_hole": false, 00:03:49.119 "seek_data": false, 00:03:49.119 "copy": true, 00:03:49.119 "nvme_iov_md": false 00:03:49.119 }, 00:03:49.119 "memory_domains": [ 00:03:49.119 { 00:03:49.119 "dma_device_id": "system", 00:03:49.119 "dma_device_type": 1 00:03:49.119 }, 00:03:49.119 { 00:03:49.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.119 "dma_device_type": 2 00:03:49.119 } 00:03:49.119 ], 00:03:49.119 "driver_specific": { 00:03:49.119 "passthru": { 00:03:49.119 "name": "Passthru0", 00:03:49.119 "base_bdev_name": "Malloc2" 00:03:49.119 } 00:03:49.119 } 00:03:49.119 } 00:03:49.119 ]' 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:49.119 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:49.377 12:44:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:49.377 00:03:49.377 real 0m0.275s 00:03:49.377 user 0m0.167s 00:03:49.377 sys 0m0.038s 00:03:49.377 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.377 12:44:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.377 ************************************ 00:03:49.377 END TEST rpc_daemon_integrity 00:03:49.377 ************************************ 00:03:49.377 12:44:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:49.378 12:44:09 rpc -- rpc/rpc.sh@84 -- # killprocess 1021930 00:03:49.378 12:44:09 rpc -- common/autotest_common.sh@950 -- # '[' -z 1021930 ']' 00:03:49.378 12:44:09 rpc -- common/autotest_common.sh@954 -- # kill -0 1021930 00:03:49.378 12:44:09 rpc -- common/autotest_common.sh@955 -- # uname 00:03:49.378 12:44:09 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:49.378 12:44:09 rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1021930 00:03:49.378 12:44:09 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:49.378 12:44:09 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:49.378 12:44:09 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1021930' 00:03:49.378 killing process with pid 1021930 00:03:49.378 12:44:09 rpc -- common/autotest_common.sh@969 -- # kill 1021930 00:03:49.378 12:44:09 rpc -- common/autotest_common.sh@974 -- # wait 1021930 00:03:49.641 00:03:49.641 real 0m2.072s 00:03:49.641 user 0m2.640s 00:03:49.641 sys 0m0.674s 00:03:49.641 12:44:09 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.641 12:44:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.641 ************************************ 00:03:49.641 END TEST rpc 00:03:49.641 ************************************ 00:03:49.641 12:44:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:49.641 12:44:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.641 12:44:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.641 12:44:09 -- common/autotest_common.sh@10 -- # set +x 00:03:49.641 ************************************ 00:03:49.641 START TEST skip_rpc 00:03:49.641 ************************************ 00:03:49.641 12:44:09 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:49.900 * Looking for test storage... 
00:03:49.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.900 12:44:10 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:49.900 12:44:10 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:49.900 12:44:10 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:49.900 12:44:10 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.900 12:44:10 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:49.900 12:44:10 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.900 12:44:10 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:49.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.900 --rc genhtml_branch_coverage=1 00:03:49.900 --rc genhtml_function_coverage=1 00:03:49.900 --rc genhtml_legend=1 00:03:49.900 --rc geninfo_all_blocks=1 00:03:49.900 --rc geninfo_unexecuted_blocks=1 00:03:49.900 00:03:49.900 ' 00:03:49.900 12:44:10 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:49.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.900 --rc genhtml_branch_coverage=1 00:03:49.900 --rc genhtml_function_coverage=1 00:03:49.900 --rc genhtml_legend=1 00:03:49.900 --rc geninfo_all_blocks=1 00:03:49.900 --rc geninfo_unexecuted_blocks=1 00:03:49.900 00:03:49.900 ' 00:03:49.900 12:44:10 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:03:49.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.900 --rc genhtml_branch_coverage=1 00:03:49.900 --rc genhtml_function_coverage=1 00:03:49.900 --rc genhtml_legend=1 00:03:49.900 --rc geninfo_all_blocks=1 00:03:49.900 --rc geninfo_unexecuted_blocks=1 00:03:49.900 00:03:49.900 ' 00:03:49.900 12:44:10 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:49.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.900 --rc genhtml_branch_coverage=1 00:03:49.900 --rc genhtml_function_coverage=1 00:03:49.900 --rc genhtml_legend=1 00:03:49.900 --rc geninfo_all_blocks=1 00:03:49.900 --rc geninfo_unexecuted_blocks=1 00:03:49.900 00:03:49.900 ' 00:03:49.900 12:44:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.900 12:44:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:49.900 12:44:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:49.900 12:44:10 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.900 12:44:10 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.900 12:44:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.900 ************************************ 00:03:49.900 START TEST skip_rpc 00:03:49.900 ************************************ 00:03:49.900 12:44:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:49.900 12:44:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1022476 00:03:49.900 12:44:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.900 12:44:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:49.900 12:44:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:49.900 [2024-10-15 12:44:10.180731] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:03:49.900 [2024-10-15 12:44:10.180772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022476 ] 00:03:50.159 [2024-10-15 12:44:10.247760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.159 [2024-10-15 12:44:10.291486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.428 12:44:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:55.428 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:55.428 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:55.428 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:55.428 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:55.428 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:55.428 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:55.428 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:55.428 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:55.429 12:44:15 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1022476 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1022476 ']' 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1022476 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1022476 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1022476' 00:03:55.429 killing process with pid 1022476 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1022476 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1022476 00:03:55.429 00:03:55.429 real 0m5.361s 00:03:55.429 user 0m5.111s 00:03:55.429 sys 0m0.283s 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.429 12:44:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.429 ************************************ 00:03:55.429 END TEST skip_rpc 00:03:55.429 ************************************ 00:03:55.429 12:44:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:55.429 12:44:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.429 12:44:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.429 12:44:15 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.429 ************************************ 00:03:55.429 START TEST skip_rpc_with_json 00:03:55.429 ************************************ 00:03:55.429 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:55.429 12:44:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:55.429 12:44:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1023399 00:03:55.429 12:44:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.429 12:44:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:55.429 12:44:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1023399 00:03:55.429 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1023399 ']' 00:03:55.429 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.429 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:55.429 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.429 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:55.429 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.429 [2024-10-15 12:44:15.615380] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:03:55.429 [2024-10-15 12:44:15.615426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023399 ] 00:03:55.429 [2024-10-15 12:44:15.680874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.429 [2024-10-15 12:44:15.722834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.688 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:55.688 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:55.688 12:44:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:55.688 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.688 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.688 [2024-10-15 12:44:15.937225] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:55.688 request: 00:03:55.688 { 00:03:55.688 "trtype": "tcp", 00:03:55.688 "method": "nvmf_get_transports", 00:03:55.688 "req_id": 1 00:03:55.688 } 00:03:55.688 Got JSON-RPC error response 00:03:55.688 response: 00:03:55.688 { 00:03:55.688 "code": -19, 00:03:55.688 "message": "No such device" 00:03:55.688 } 00:03:55.688 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:55.688 12:44:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:55.688 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.688 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.688 [2024-10-15 12:44:15.949332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.688 12:44:15 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.688 12:44:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:55.688 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.688 12:44:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.948 12:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.948 12:44:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:55.948 { 00:03:55.948 "subsystems": [ 00:03:55.948 { 00:03:55.948 "subsystem": "fsdev", 00:03:55.948 "config": [ 00:03:55.948 { 00:03:55.948 "method": "fsdev_set_opts", 00:03:55.948 "params": { 00:03:55.948 "fsdev_io_pool_size": 65535, 00:03:55.948 "fsdev_io_cache_size": 256 00:03:55.948 } 00:03:55.948 } 00:03:55.948 ] 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "subsystem": "vfio_user_target", 00:03:55.948 "config": null 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "subsystem": "keyring", 00:03:55.948 "config": [] 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "subsystem": "iobuf", 00:03:55.948 "config": [ 00:03:55.948 { 00:03:55.948 "method": "iobuf_set_options", 00:03:55.948 "params": { 00:03:55.948 "small_pool_count": 8192, 00:03:55.948 "large_pool_count": 1024, 00:03:55.948 "small_bufsize": 8192, 00:03:55.948 "large_bufsize": 135168 00:03:55.948 } 00:03:55.948 } 00:03:55.948 ] 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "subsystem": "sock", 00:03:55.948 "config": [ 00:03:55.948 { 00:03:55.948 "method": "sock_set_default_impl", 00:03:55.948 "params": { 00:03:55.948 "impl_name": "posix" 00:03:55.948 } 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "method": "sock_impl_set_options", 00:03:55.948 "params": { 00:03:55.948 "impl_name": "ssl", 00:03:55.948 "recv_buf_size": 4096, 00:03:55.948 "send_buf_size": 4096, 00:03:55.948 "enable_recv_pipe": true, 
00:03:55.948 "enable_quickack": false, 00:03:55.948 "enable_placement_id": 0, 00:03:55.948 "enable_zerocopy_send_server": true, 00:03:55.948 "enable_zerocopy_send_client": false, 00:03:55.948 "zerocopy_threshold": 0, 00:03:55.948 "tls_version": 0, 00:03:55.948 "enable_ktls": false 00:03:55.948 } 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "method": "sock_impl_set_options", 00:03:55.948 "params": { 00:03:55.948 "impl_name": "posix", 00:03:55.948 "recv_buf_size": 2097152, 00:03:55.948 "send_buf_size": 2097152, 00:03:55.948 "enable_recv_pipe": true, 00:03:55.948 "enable_quickack": false, 00:03:55.948 "enable_placement_id": 0, 00:03:55.948 "enable_zerocopy_send_server": true, 00:03:55.948 "enable_zerocopy_send_client": false, 00:03:55.948 "zerocopy_threshold": 0, 00:03:55.948 "tls_version": 0, 00:03:55.948 "enable_ktls": false 00:03:55.948 } 00:03:55.948 } 00:03:55.948 ] 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "subsystem": "vmd", 00:03:55.948 "config": [] 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "subsystem": "accel", 00:03:55.948 "config": [ 00:03:55.948 { 00:03:55.948 "method": "accel_set_options", 00:03:55.948 "params": { 00:03:55.948 "small_cache_size": 128, 00:03:55.948 "large_cache_size": 16, 00:03:55.948 "task_count": 2048, 00:03:55.948 "sequence_count": 2048, 00:03:55.948 "buf_count": 2048 00:03:55.948 } 00:03:55.948 } 00:03:55.948 ] 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "subsystem": "bdev", 00:03:55.948 "config": [ 00:03:55.948 { 00:03:55.948 "method": "bdev_set_options", 00:03:55.948 "params": { 00:03:55.948 "bdev_io_pool_size": 65535, 00:03:55.948 "bdev_io_cache_size": 256, 00:03:55.948 "bdev_auto_examine": true, 00:03:55.948 "iobuf_small_cache_size": 128, 00:03:55.948 "iobuf_large_cache_size": 16 00:03:55.948 } 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "method": "bdev_raid_set_options", 00:03:55.948 "params": { 00:03:55.948 "process_window_size_kb": 1024, 00:03:55.948 "process_max_bandwidth_mb_sec": 0 00:03:55.948 } 00:03:55.948 }, 
00:03:55.948 { 00:03:55.948 "method": "bdev_iscsi_set_options", 00:03:55.948 "params": { 00:03:55.948 "timeout_sec": 30 00:03:55.948 } 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "method": "bdev_nvme_set_options", 00:03:55.948 "params": { 00:03:55.948 "action_on_timeout": "none", 00:03:55.948 "timeout_us": 0, 00:03:55.948 "timeout_admin_us": 0, 00:03:55.948 "keep_alive_timeout_ms": 10000, 00:03:55.948 "arbitration_burst": 0, 00:03:55.948 "low_priority_weight": 0, 00:03:55.948 "medium_priority_weight": 0, 00:03:55.948 "high_priority_weight": 0, 00:03:55.948 "nvme_adminq_poll_period_us": 10000, 00:03:55.948 "nvme_ioq_poll_period_us": 0, 00:03:55.948 "io_queue_requests": 0, 00:03:55.948 "delay_cmd_submit": true, 00:03:55.948 "transport_retry_count": 4, 00:03:55.948 "bdev_retry_count": 3, 00:03:55.948 "transport_ack_timeout": 0, 00:03:55.948 "ctrlr_loss_timeout_sec": 0, 00:03:55.948 "reconnect_delay_sec": 0, 00:03:55.948 "fast_io_fail_timeout_sec": 0, 00:03:55.948 "disable_auto_failback": false, 00:03:55.948 "generate_uuids": false, 00:03:55.948 "transport_tos": 0, 00:03:55.948 "nvme_error_stat": false, 00:03:55.948 "rdma_srq_size": 0, 00:03:55.948 "io_path_stat": false, 00:03:55.948 "allow_accel_sequence": false, 00:03:55.948 "rdma_max_cq_size": 0, 00:03:55.948 "rdma_cm_event_timeout_ms": 0, 00:03:55.948 "dhchap_digests": [ 00:03:55.948 "sha256", 00:03:55.948 "sha384", 00:03:55.948 "sha512" 00:03:55.948 ], 00:03:55.948 "dhchap_dhgroups": [ 00:03:55.948 "null", 00:03:55.948 "ffdhe2048", 00:03:55.948 "ffdhe3072", 00:03:55.948 "ffdhe4096", 00:03:55.948 "ffdhe6144", 00:03:55.948 "ffdhe8192" 00:03:55.948 ] 00:03:55.948 } 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "method": "bdev_nvme_set_hotplug", 00:03:55.948 "params": { 00:03:55.948 "period_us": 100000, 00:03:55.948 "enable": false 00:03:55.948 } 00:03:55.948 }, 00:03:55.948 { 00:03:55.948 "method": "bdev_wait_for_examine" 00:03:55.948 } 00:03:55.949 ] 00:03:55.949 }, 00:03:55.949 { 00:03:55.949 "subsystem": "scsi", 
00:03:55.949 "config": null 00:03:55.949 }, 00:03:55.949 { 00:03:55.949 "subsystem": "scheduler", 00:03:55.949 "config": [ 00:03:55.949 { 00:03:55.949 "method": "framework_set_scheduler", 00:03:55.949 "params": { 00:03:55.949 "name": "static" 00:03:55.949 } 00:03:55.949 } 00:03:55.949 ] 00:03:55.949 }, 00:03:55.949 { 00:03:55.949 "subsystem": "vhost_scsi", 00:03:55.949 "config": [] 00:03:55.949 }, 00:03:55.949 { 00:03:55.949 "subsystem": "vhost_blk", 00:03:55.949 "config": [] 00:03:55.949 }, 00:03:55.949 { 00:03:55.949 "subsystem": "ublk", 00:03:55.949 "config": [] 00:03:55.949 }, 00:03:55.949 { 00:03:55.949 "subsystem": "nbd", 00:03:55.949 "config": [] 00:03:55.949 }, 00:03:55.949 { 00:03:55.949 "subsystem": "nvmf", 00:03:55.949 "config": [ 00:03:55.949 { 00:03:55.949 "method": "nvmf_set_config", 00:03:55.949 "params": { 00:03:55.949 "discovery_filter": "match_any", 00:03:55.949 "admin_cmd_passthru": { 00:03:55.949 "identify_ctrlr": false 00:03:55.949 }, 00:03:55.949 "dhchap_digests": [ 00:03:55.949 "sha256", 00:03:55.949 "sha384", 00:03:55.949 "sha512" 00:03:55.949 ], 00:03:55.949 "dhchap_dhgroups": [ 00:03:55.949 "null", 00:03:55.949 "ffdhe2048", 00:03:55.949 "ffdhe3072", 00:03:55.949 "ffdhe4096", 00:03:55.949 "ffdhe6144", 00:03:55.949 "ffdhe8192" 00:03:55.949 ] 00:03:55.949 } 00:03:55.949 }, 00:03:55.949 { 00:03:55.949 "method": "nvmf_set_max_subsystems", 00:03:55.949 "params": { 00:03:55.949 "max_subsystems": 1024 00:03:55.949 } 00:03:55.949 }, 00:03:55.949 { 00:03:55.949 "method": "nvmf_set_crdt", 00:03:55.949 "params": { 00:03:55.949 "crdt1": 0, 00:03:55.949 "crdt2": 0, 00:03:55.949 "crdt3": 0 00:03:55.949 } 00:03:55.949 }, 00:03:55.949 { 00:03:55.949 "method": "nvmf_create_transport", 00:03:55.949 "params": { 00:03:55.949 "trtype": "TCP", 00:03:55.949 "max_queue_depth": 128, 00:03:55.949 "max_io_qpairs_per_ctrlr": 127, 00:03:55.949 "in_capsule_data_size": 4096, 00:03:55.949 "max_io_size": 131072, 00:03:55.949 "io_unit_size": 131072, 00:03:55.949 
"max_aq_depth": 128, 00:03:55.949 "num_shared_buffers": 511, 00:03:55.949 "buf_cache_size": 4294967295, 00:03:55.949 "dif_insert_or_strip": false, 00:03:55.949 "zcopy": false, 00:03:55.949 "c2h_success": true, 00:03:55.949 "sock_priority": 0, 00:03:55.949 "abort_timeout_sec": 1, 00:03:55.949 "ack_timeout": 0, 00:03:55.949 "data_wr_pool_size": 0 00:03:55.949 } 00:03:55.949 } 00:03:55.949 ] 00:03:55.949 }, 00:03:55.949 { 00:03:55.949 "subsystem": "iscsi", 00:03:55.949 "config": [ 00:03:55.949 { 00:03:55.949 "method": "iscsi_set_options", 00:03:55.949 "params": { 00:03:55.949 "node_base": "iqn.2016-06.io.spdk", 00:03:55.949 "max_sessions": 128, 00:03:55.949 "max_connections_per_session": 2, 00:03:55.949 "max_queue_depth": 64, 00:03:55.949 "default_time2wait": 2, 00:03:55.949 "default_time2retain": 20, 00:03:55.949 "first_burst_length": 8192, 00:03:55.949 "immediate_data": true, 00:03:55.949 "allow_duplicated_isid": false, 00:03:55.949 "error_recovery_level": 0, 00:03:55.949 "nop_timeout": 60, 00:03:55.949 "nop_in_interval": 30, 00:03:55.949 "disable_chap": false, 00:03:55.949 "require_chap": false, 00:03:55.949 "mutual_chap": false, 00:03:55.949 "chap_group": 0, 00:03:55.949 "max_large_datain_per_connection": 64, 00:03:55.949 "max_r2t_per_connection": 4, 00:03:55.949 "pdu_pool_size": 36864, 00:03:55.949 "immediate_data_pool_size": 16384, 00:03:55.949 "data_out_pool_size": 2048 00:03:55.949 } 00:03:55.949 } 00:03:55.949 ] 00:03:55.949 } 00:03:55.949 ] 00:03:55.949 } 00:03:55.949 12:44:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:55.949 12:44:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1023399 00:03:55.949 12:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1023399 ']' 00:03:55.949 12:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1023399 00:03:55.949 12:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 
00:03:55.949 12:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:55.949 12:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1023399 00:03:55.949 12:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:55.949 12:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:55.949 12:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1023399' 00:03:55.949 killing process with pid 1023399 00:03:55.949 12:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1023399 00:03:55.949 12:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1023399 00:03:56.208 12:44:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1023545 00:03:56.208 12:44:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:56.208 12:44:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:01.477 12:44:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1023545 00:04:01.477 12:44:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1023545 ']' 00:04:01.477 12:44:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1023545 00:04:01.477 12:44:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:01.477 12:44:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:01.477 12:44:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1023545 00:04:01.477 12:44:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:04:01.477 12:44:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:01.477 12:44:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1023545' 00:04:01.477 killing process with pid 1023545 00:04:01.477 12:44:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1023545 00:04:01.477 12:44:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1023545 00:04:01.736 12:44:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.736 12:44:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.736 00:04:01.736 real 0m6.273s 00:04:01.736 user 0m5.976s 00:04:01.736 sys 0m0.596s 00:04:01.736 12:44:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.736 12:44:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:01.736 ************************************ 00:04:01.736 END TEST skip_rpc_with_json 00:04:01.736 ************************************ 00:04:01.736 12:44:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:01.736 12:44:21 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.736 12:44:21 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.736 12:44:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.737 ************************************ 00:04:01.737 START TEST skip_rpc_with_delay 00:04:01.737 ************************************ 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.737 [2024-10-15 12:44:21.957011] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:01.737 00:04:01.737 real 0m0.069s 00:04:01.737 user 0m0.040s 00:04:01.737 sys 0m0.028s 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.737 12:44:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:01.737 ************************************ 00:04:01.737 END TEST skip_rpc_with_delay 00:04:01.737 ************************************ 00:04:01.737 12:44:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:01.737 12:44:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:01.737 12:44:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:01.737 12:44:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.737 12:44:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.737 12:44:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.737 ************************************ 00:04:01.737 START TEST exit_on_failed_rpc_init 00:04:01.737 ************************************ 00:04:01.737 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:01.737 12:44:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1024534 00:04:01.737 12:44:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1024534 00:04:01.737 12:44:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:01.737 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1024534 ']' 00:04:01.737 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.737 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:01.737 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.737 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:01.737 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:01.996 [2024-10-15 12:44:22.093132] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:04:01.996 [2024-10-15 12:44:22.093174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024534 ] 00:04:01.996 [2024-10-15 12:44:22.160492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.996 [2024-10-15 12:44:22.200099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:02.256 
12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:02.256 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:02.256 [2024-10-15 12:44:22.485345] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:04:02.256 [2024-10-15 12:44:22.485388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024634 ] 00:04:02.256 [2024-10-15 12:44:22.554013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.516 [2024-10-15 12:44:22.595708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.516 [2024-10-15 12:44:22.595767] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:02.516 [2024-10-15 12:44:22.595776] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:02.516 [2024-10-15 12:44:22.595782] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1024534 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1024534 ']' 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1024534 00:04:02.516 12:44:22 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1024534 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1024534' 00:04:02.516 killing process with pid 1024534 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1024534 00:04:02.516 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1024534 00:04:02.775 00:04:02.775 real 0m0.943s 00:04:02.775 user 0m0.987s 00:04:02.775 sys 0m0.399s 00:04:02.775 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.775 12:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:02.775 ************************************ 00:04:02.775 END TEST exit_on_failed_rpc_init 00:04:02.775 ************************************ 00:04:02.775 12:44:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:02.775 00:04:02.775 real 0m13.105s 00:04:02.775 user 0m12.331s 00:04:02.775 sys 0m1.581s 00:04:02.775 12:44:23 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.775 12:44:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.775 ************************************ 00:04:02.775 END TEST skip_rpc 00:04:02.775 ************************************ 00:04:02.775 12:44:23 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:02.775 12:44:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.775 12:44:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.775 12:44:23 -- common/autotest_common.sh@10 -- # set +x 00:04:02.775 ************************************ 00:04:02.775 START TEST rpc_client 00:04:02.775 ************************************ 00:04:02.775 12:44:23 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:03.035 * Looking for test storage... 00:04:03.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:03.035 12:44:23 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:03.035 12:44:23 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:03.035 12:44:23 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:03.035 12:44:23 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.035 12:44:23 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:03.035 12:44:23 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.035 12:44:23 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:03.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.035 --rc genhtml_branch_coverage=1 00:04:03.035 --rc genhtml_function_coverage=1 00:04:03.035 --rc genhtml_legend=1 00:04:03.035 --rc geninfo_all_blocks=1 00:04:03.035 --rc geninfo_unexecuted_blocks=1 00:04:03.035 00:04:03.035 ' 00:04:03.036 12:44:23 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:03.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.036 --rc genhtml_branch_coverage=1 
00:04:03.036 --rc genhtml_function_coverage=1 00:04:03.036 --rc genhtml_legend=1 00:04:03.036 --rc geninfo_all_blocks=1 00:04:03.036 --rc geninfo_unexecuted_blocks=1 00:04:03.036 00:04:03.036 ' 00:04:03.036 12:44:23 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:03.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.036 --rc genhtml_branch_coverage=1 00:04:03.036 --rc genhtml_function_coverage=1 00:04:03.036 --rc genhtml_legend=1 00:04:03.036 --rc geninfo_all_blocks=1 00:04:03.036 --rc geninfo_unexecuted_blocks=1 00:04:03.036 00:04:03.036 ' 00:04:03.036 12:44:23 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:03.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.036 --rc genhtml_branch_coverage=1 00:04:03.036 --rc genhtml_function_coverage=1 00:04:03.036 --rc genhtml_legend=1 00:04:03.036 --rc geninfo_all_blocks=1 00:04:03.036 --rc geninfo_unexecuted_blocks=1 00:04:03.036 00:04:03.036 ' 00:04:03.036 12:44:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:03.036 OK 00:04:03.036 12:44:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:03.036 00:04:03.036 real 0m0.198s 00:04:03.036 user 0m0.117s 00:04:03.036 sys 0m0.093s 00:04:03.036 12:44:23 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.036 12:44:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:03.036 ************************************ 00:04:03.036 END TEST rpc_client 00:04:03.036 ************************************ 00:04:03.036 12:44:23 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:03.036 12:44:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.036 12:44:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.036 12:44:23 -- common/autotest_common.sh@10 
-- # set +x 00:04:03.296 ************************************ 00:04:03.296 START TEST json_config 00:04:03.296 ************************************ 00:04:03.296 12:44:23 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:03.296 12:44:23 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:03.296 12:44:23 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:03.296 12:44:23 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:03.296 12:44:23 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:03.296 12:44:23 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.296 12:44:23 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.296 12:44:23 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.296 12:44:23 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.296 12:44:23 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.296 12:44:23 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.296 12:44:23 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.296 12:44:23 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.296 12:44:23 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.296 12:44:23 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.296 12:44:23 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.296 12:44:23 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:03.296 12:44:23 json_config -- scripts/common.sh@345 -- # : 1 00:04:03.296 12:44:23 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.296 12:44:23 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.296 12:44:23 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:03.296 12:44:23 json_config -- scripts/common.sh@353 -- # local d=1 00:04:03.296 12:44:23 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.296 12:44:23 json_config -- scripts/common.sh@355 -- # echo 1 00:04:03.296 12:44:23 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.296 12:44:23 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:03.296 12:44:23 json_config -- scripts/common.sh@353 -- # local d=2 00:04:03.296 12:44:23 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.296 12:44:23 json_config -- scripts/common.sh@355 -- # echo 2 00:04:03.296 12:44:23 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.296 12:44:23 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.296 12:44:23 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.296 12:44:23 json_config -- scripts/common.sh@368 -- # return 0 00:04:03.296 12:44:23 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.296 12:44:23 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:03.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.296 --rc genhtml_branch_coverage=1 00:04:03.296 --rc genhtml_function_coverage=1 00:04:03.296 --rc genhtml_legend=1 00:04:03.296 --rc geninfo_all_blocks=1 00:04:03.296 --rc geninfo_unexecuted_blocks=1 00:04:03.296 00:04:03.297 ' 00:04:03.297 12:44:23 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:03.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.297 --rc genhtml_branch_coverage=1 00:04:03.297 --rc genhtml_function_coverage=1 00:04:03.297 --rc genhtml_legend=1 00:04:03.297 --rc geninfo_all_blocks=1 00:04:03.297 --rc geninfo_unexecuted_blocks=1 00:04:03.297 00:04:03.297 ' 00:04:03.297 12:44:23 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:03.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.297 --rc genhtml_branch_coverage=1 00:04:03.297 --rc genhtml_function_coverage=1 00:04:03.297 --rc genhtml_legend=1 00:04:03.297 --rc geninfo_all_blocks=1 00:04:03.297 --rc geninfo_unexecuted_blocks=1 00:04:03.297 00:04:03.297 ' 00:04:03.297 12:44:23 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:03.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.297 --rc genhtml_branch_coverage=1 00:04:03.297 --rc genhtml_function_coverage=1 00:04:03.297 --rc genhtml_legend=1 00:04:03.297 --rc geninfo_all_blocks=1 00:04:03.297 --rc geninfo_unexecuted_blocks=1 00:04:03.297 00:04:03.297 ' 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:03.297 12:44:23 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:03.297 12:44:23 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:03.297 12:44:23 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:03.297 12:44:23 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:03.297 12:44:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.297 12:44:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.297 12:44:23 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.297 12:44:23 json_config -- paths/export.sh@5 -- # export PATH 00:04:03.297 12:44:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@51 -- # : 0 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:03.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:03.297 12:44:23 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:03.297 INFO: JSON configuration test init 00:04:03.297 12:44:23 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:03.297 12:44:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.297 12:44:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:03.297 12:44:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.297 12:44:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.297 12:44:23 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:03.297 12:44:23 json_config -- json_config/common.sh@9 -- # local app=target 00:04:03.297 12:44:23 json_config -- json_config/common.sh@10 -- # shift 00:04:03.297 12:44:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:03.297 12:44:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:03.297 12:44:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:03.297 12:44:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.297 12:44:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.297 12:44:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1024898 00:04:03.297 12:44:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:03.297 Waiting for target to run... 
00:04:03.297 12:44:23 json_config -- json_config/common.sh@25 -- # waitforlisten 1024898 /var/tmp/spdk_tgt.sock 00:04:03.297 12:44:23 json_config -- common/autotest_common.sh@831 -- # '[' -z 1024898 ']' 00:04:03.297 12:44:23 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:03.297 12:44:23 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:03.297 12:44:23 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:03.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:03.297 12:44:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:03.297 12:44:23 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:03.297 12:44:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.297 [2024-10-15 12:44:23.606686] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:04:03.297 [2024-10-15 12:44:23.606734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024898 ] 00:04:03.866 [2024-10-15 12:44:23.892622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.866 [2024-10-15 12:44:23.926937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.126 12:44:24 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:04.126 12:44:24 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:04.126 12:44:24 json_config -- json_config/common.sh@26 -- # echo '' 00:04:04.126 00:04:04.126 12:44:24 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:04.126 12:44:24 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:04.126 12:44:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:04.126 12:44:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.127 12:44:24 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:04.127 12:44:24 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:04.127 12:44:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:04.127 12:44:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.386 12:44:24 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:04.386 12:44:24 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:04.386 12:44:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:07.674 12:44:27 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:07.675 12:44:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.675 12:44:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:07.675 12:44:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@54 -- # sort 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:07.675 12:44:27 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:07.675 12:44:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:07.675 12:44:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:07.675 12:44:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.675 12:44:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:07.675 12:44:27 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:07.675 12:44:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:07.675 MallocForNvmf0 00:04:07.934 12:44:28 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:07.934 12:44:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:07.934 MallocForNvmf1 00:04:07.934 12:44:28 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:07.934 12:44:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:08.193 [2024-10-15 12:44:28.364763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:08.193 12:44:28 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:08.193 12:44:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:08.452 12:44:28 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:08.452 12:44:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:08.452 12:44:28 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:08.452 12:44:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:08.712 12:44:28 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:08.712 12:44:28 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:08.971 [2024-10-15 12:44:29.066958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:08.971 12:44:29 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:08.971 12:44:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.971 12:44:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.971 12:44:29 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:08.971 12:44:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.971 12:44:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.971 12:44:29 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:08.971 12:44:29 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.971 12:44:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:09.231 MallocBdevForConfigChangeCheck 00:04:09.231 12:44:29 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:09.231 12:44:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:09.231 12:44:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.231 12:44:29 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:09.231 12:44:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:09.489 12:44:29 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:09.490 INFO: shutting down applications... 00:04:09.490 12:44:29 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:09.490 12:44:29 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:09.490 12:44:29 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:09.490 12:44:29 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:12.026 Calling clear_iscsi_subsystem 00:04:12.026 Calling clear_nvmf_subsystem 00:04:12.026 Calling clear_nbd_subsystem 00:04:12.026 Calling clear_ublk_subsystem 00:04:12.026 Calling clear_vhost_blk_subsystem 00:04:12.026 Calling clear_vhost_scsi_subsystem 00:04:12.026 Calling clear_bdev_subsystem 00:04:12.026 12:44:31 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:12.026 12:44:31 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:12.026 12:44:31 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:12.026 12:44:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.026 12:44:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:12.026 12:44:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:12.026 12:44:32 json_config -- json_config/json_config.sh@352 -- # break 00:04:12.026 12:44:32 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:12.026 12:44:32 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:12.026 12:44:32 json_config -- json_config/common.sh@31 -- # local app=target 00:04:12.026 12:44:32 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:12.026 12:44:32 json_config -- json_config/common.sh@35 -- # [[ -n 1024898 ]] 00:04:12.026 12:44:32 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1024898 00:04:12.026 12:44:32 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:12.026 12:44:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.026 12:44:32 json_config -- json_config/common.sh@41 -- # kill -0 1024898 00:04:12.026 12:44:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:12.595 12:44:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:12.595 12:44:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.595 12:44:32 json_config -- json_config/common.sh@41 -- # kill -0 1024898 00:04:12.595 12:44:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:12.595 12:44:32 json_config -- json_config/common.sh@43 -- # break 00:04:12.595 12:44:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:12.595 12:44:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:12.595 SPDK target shutdown done 00:04:12.595 12:44:32 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:12.595 INFO: relaunching applications... 
00:04:12.595 12:44:32 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.595 12:44:32 json_config -- json_config/common.sh@9 -- # local app=target 00:04:12.595 12:44:32 json_config -- json_config/common.sh@10 -- # shift 00:04:12.595 12:44:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:12.595 12:44:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:12.595 12:44:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:12.595 12:44:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.595 12:44:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.595 12:44:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1026632 00:04:12.595 12:44:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:12.595 12:44:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.595 Waiting for target to run... 00:04:12.595 12:44:32 json_config -- json_config/common.sh@25 -- # waitforlisten 1026632 /var/tmp/spdk_tgt.sock 00:04:12.595 12:44:32 json_config -- common/autotest_common.sh@831 -- # '[' -z 1026632 ']' 00:04:12.595 12:44:32 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:12.595 12:44:32 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:12.595 12:44:32 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:12.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:12.595 12:44:32 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:12.595 12:44:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.595 [2024-10-15 12:44:32.843120] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:04:12.595 [2024-10-15 12:44:32.843176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026632 ] 00:04:12.854 [2024-10-15 12:44:33.132406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.855 [2024-10-15 12:44:33.166140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.147 [2024-10-15 12:44:36.196210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.147 [2024-10-15 12:44:36.228565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:16.147 12:44:36 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:16.147 12:44:36 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:16.147 12:44:36 json_config -- json_config/common.sh@26 -- # echo '' 00:04:16.147 00:04:16.147 12:44:36 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:16.147 12:44:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:16.147 INFO: Checking if target configuration is the same... 
00:04:16.147 12:44:36 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.147 12:44:36 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:16.147 12:44:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.147 + '[' 2 -ne 2 ']' 00:04:16.147 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:16.147 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:16.147 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:16.147 +++ basename /dev/fd/62 00:04:16.147 ++ mktemp /tmp/62.XXX 00:04:16.147 + tmp_file_1=/tmp/62.YdD 00:04:16.147 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.147 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:16.147 + tmp_file_2=/tmp/spdk_tgt_config.json.76B 00:04:16.147 + ret=0 00:04:16.147 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.406 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.406 + diff -u /tmp/62.YdD /tmp/spdk_tgt_config.json.76B 00:04:16.406 + echo 'INFO: JSON config files are the same' 00:04:16.406 INFO: JSON config files are the same 00:04:16.406 + rm /tmp/62.YdD /tmp/spdk_tgt_config.json.76B 00:04:16.406 + exit 0 00:04:16.406 12:44:36 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:16.406 12:44:36 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:16.406 INFO: changing configuration and checking if this can be detected... 
00:04:16.406 12:44:36 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:16.406 12:44:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:16.664 12:44:36 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.664 12:44:36 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:16.664 12:44:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.664 + '[' 2 -ne 2 ']' 00:04:16.664 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:16.664 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:16.664 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:16.664 +++ basename /dev/fd/62 00:04:16.664 ++ mktemp /tmp/62.XXX 00:04:16.664 + tmp_file_1=/tmp/62.cuC 00:04:16.664 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.664 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:16.664 + tmp_file_2=/tmp/spdk_tgt_config.json.zyJ 00:04:16.664 + ret=0 00:04:16.664 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.921 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:17.178 + diff -u /tmp/62.cuC /tmp/spdk_tgt_config.json.zyJ 00:04:17.178 + ret=1 00:04:17.178 + echo '=== Start of file: /tmp/62.cuC ===' 00:04:17.178 + cat /tmp/62.cuC 00:04:17.178 + echo '=== End of file: /tmp/62.cuC ===' 00:04:17.178 + echo '' 00:04:17.178 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zyJ ===' 00:04:17.178 + cat /tmp/spdk_tgt_config.json.zyJ 00:04:17.178 + echo '=== End of file: /tmp/spdk_tgt_config.json.zyJ ===' 00:04:17.178 + echo '' 00:04:17.178 + rm /tmp/62.cuC /tmp/spdk_tgt_config.json.zyJ 00:04:17.178 + exit 1 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:17.178 INFO: configuration change detected. 
00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@324 -- # [[ -n 1026632 ]] 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.178 12:44:37 json_config -- json_config/json_config.sh@330 -- # killprocess 1026632 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@950 -- # '[' -z 1026632 ']' 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@954 -- # kill -0 
1026632 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@955 -- # uname 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1026632 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1026632' 00:04:17.178 killing process with pid 1026632 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@969 -- # kill 1026632 00:04:17.178 12:44:37 json_config -- common/autotest_common.sh@974 -- # wait 1026632 00:04:19.707 12:44:39 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.707 12:44:39 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:19.707 12:44:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:19.707 12:44:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.707 12:44:39 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:19.707 12:44:39 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:19.707 INFO: Success 00:04:19.707 00:04:19.707 real 0m16.098s 00:04:19.707 user 0m16.600s 00:04:19.707 sys 0m2.396s 00:04:19.707 12:44:39 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.707 12:44:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.707 ************************************ 00:04:19.707 END TEST json_config 00:04:19.707 ************************************ 00:04:19.707 12:44:39 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:19.707 12:44:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.707 12:44:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.707 12:44:39 -- common/autotest_common.sh@10 -- # set +x 00:04:19.707 ************************************ 00:04:19.707 START TEST json_config_extra_key 00:04:19.707 ************************************ 00:04:19.707 12:44:39 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:19.707 12:44:39 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:19.707 12:44:39 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:19.707 12:44:39 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:19.707 12:44:39 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.707 12:44:39 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:19.708 12:44:39 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.708 12:44:39 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:19.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.708 --rc genhtml_branch_coverage=1 00:04:19.708 --rc genhtml_function_coverage=1 00:04:19.708 --rc genhtml_legend=1 00:04:19.708 --rc geninfo_all_blocks=1 
00:04:19.708 --rc geninfo_unexecuted_blocks=1 00:04:19.708 00:04:19.708 ' 00:04:19.708 12:44:39 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:19.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.708 --rc genhtml_branch_coverage=1 00:04:19.708 --rc genhtml_function_coverage=1 00:04:19.708 --rc genhtml_legend=1 00:04:19.708 --rc geninfo_all_blocks=1 00:04:19.708 --rc geninfo_unexecuted_blocks=1 00:04:19.708 00:04:19.708 ' 00:04:19.708 12:44:39 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:19.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.708 --rc genhtml_branch_coverage=1 00:04:19.708 --rc genhtml_function_coverage=1 00:04:19.708 --rc genhtml_legend=1 00:04:19.708 --rc geninfo_all_blocks=1 00:04:19.708 --rc geninfo_unexecuted_blocks=1 00:04:19.708 00:04:19.708 ' 00:04:19.708 12:44:39 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:19.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.708 --rc genhtml_branch_coverage=1 00:04:19.708 --rc genhtml_function_coverage=1 00:04:19.708 --rc genhtml_legend=1 00:04:19.708 --rc geninfo_all_blocks=1 00:04:19.708 --rc geninfo_unexecuted_blocks=1 00:04:19.708 00:04:19.708 ' 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:19.708 12:44:39 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:19.708 12:44:39 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.708 12:44:39 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.708 12:44:39 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.708 12:44:39 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.708 12:44:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.708 12:44:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.708 12:44:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:19.708 12:44:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:19.708 12:44:39 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:19.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:19.708 12:44:39 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:19.708 INFO: launching applications... 00:04:19.708 12:44:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:19.708 12:44:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:19.708 12:44:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:19.708 12:44:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.708 12:44:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.708 12:44:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.708 12:44:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.708 12:44:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.708 12:44:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1027907 00:04:19.708 12:44:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:19.708 Waiting for target to run... 
00:04:19.708 12:44:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1027907 /var/tmp/spdk_tgt.sock 00:04:19.708 12:44:39 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1027907 ']' 00:04:19.708 12:44:39 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:19.708 12:44:39 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.708 12:44:39 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:19.708 12:44:39 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.708 12:44:39 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:19.708 12:44:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:19.708 [2024-10-15 12:44:39.764884] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:04:19.708 [2024-10-15 12:44:39.764932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027907 ] 00:04:19.967 [2024-10-15 12:44:40.041994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.967 [2024-10-15 12:44:40.078528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.534 12:44:40 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:20.534 12:44:40 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:20.534 12:44:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:20.534 00:04:20.534 12:44:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:20.534 INFO: shutting down applications... 00:04:20.534 12:44:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:20.534 12:44:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:20.534 12:44:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:20.534 12:44:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1027907 ]] 00:04:20.534 12:44:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1027907 00:04:20.534 12:44:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:20.534 12:44:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.534 12:44:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1027907 00:04:20.534 12:44:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.793 12:44:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:21.052 12:44:41 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.052 12:44:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1027907 00:04:21.052 12:44:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:21.052 12:44:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:21.052 12:44:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:21.052 12:44:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:21.052 SPDK target shutdown done 00:04:21.052 12:44:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:21.053 Success 00:04:21.053 00:04:21.053 real 0m1.590s 00:04:21.053 user 0m1.402s 00:04:21.053 sys 0m0.378s 00:04:21.053 12:44:41 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.053 12:44:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:21.053 ************************************ 00:04:21.053 END TEST json_config_extra_key 00:04:21.053 ************************************ 00:04:21.053 12:44:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:21.053 12:44:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.053 12:44:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.053 12:44:41 -- common/autotest_common.sh@10 -- # set +x 00:04:21.053 ************************************ 00:04:21.053 START TEST alias_rpc 00:04:21.053 ************************************ 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:21.053 * Looking for test storage... 
00:04:21.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.053 12:44:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:21.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.053 --rc genhtml_branch_coverage=1 00:04:21.053 --rc genhtml_function_coverage=1 00:04:21.053 --rc genhtml_legend=1 00:04:21.053 --rc geninfo_all_blocks=1 00:04:21.053 --rc geninfo_unexecuted_blocks=1 00:04:21.053 00:04:21.053 ' 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:21.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.053 --rc genhtml_branch_coverage=1 00:04:21.053 --rc genhtml_function_coverage=1 00:04:21.053 --rc genhtml_legend=1 00:04:21.053 --rc geninfo_all_blocks=1 00:04:21.053 --rc geninfo_unexecuted_blocks=1 00:04:21.053 00:04:21.053 ' 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:04:21.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.053 --rc genhtml_branch_coverage=1 00:04:21.053 --rc genhtml_function_coverage=1 00:04:21.053 --rc genhtml_legend=1 00:04:21.053 --rc geninfo_all_blocks=1 00:04:21.053 --rc geninfo_unexecuted_blocks=1 00:04:21.053 00:04:21.053 ' 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:21.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.053 --rc genhtml_branch_coverage=1 00:04:21.053 --rc genhtml_function_coverage=1 00:04:21.053 --rc genhtml_legend=1 00:04:21.053 --rc geninfo_all_blocks=1 00:04:21.053 --rc geninfo_unexecuted_blocks=1 00:04:21.053 00:04:21.053 ' 00:04:21.053 12:44:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:21.053 12:44:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1028202 00:04:21.053 12:44:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.053 12:44:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1028202 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1028202 ']' 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:21.053 12:44:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.313 [2024-10-15 12:44:41.421341] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:04:21.313 [2024-10-15 12:44:41.421384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028202 ] 00:04:21.313 [2024-10-15 12:44:41.487132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.313 [2024-10-15 12:44:41.529357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.572 12:44:41 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.572 12:44:41 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:21.572 12:44:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:21.831 12:44:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1028202 00:04:21.831 12:44:41 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1028202 ']' 00:04:21.831 12:44:41 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1028202 00:04:21.831 12:44:41 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:21.831 12:44:41 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:21.831 12:44:41 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1028202 00:04:21.831 12:44:42 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:21.831 12:44:42 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:21.831 12:44:42 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1028202' 00:04:21.831 killing process with pid 1028202 00:04:21.831 12:44:42 alias_rpc -- common/autotest_common.sh@969 -- # kill 1028202 00:04:21.831 12:44:42 alias_rpc -- common/autotest_common.sh@974 -- # wait 1028202 00:04:22.089 00:04:22.089 real 0m1.140s 00:04:22.089 user 0m1.150s 00:04:22.089 sys 0m0.428s 00:04:22.089 12:44:42 alias_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.089 12:44:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.089 ************************************ 00:04:22.089 END TEST alias_rpc 00:04:22.089 ************************************ 00:04:22.089 12:44:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:22.089 12:44:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:22.089 12:44:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.089 12:44:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.089 12:44:42 -- common/autotest_common.sh@10 -- # set +x 00:04:22.089 ************************************ 00:04:22.089 START TEST spdkcli_tcp 00:04:22.089 ************************************ 00:04:22.089 12:44:42 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:22.348 * Looking for test storage... 
00:04:22.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:22.348 12:44:42 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:22.348 12:44:42 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:22.348 12:44:42 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:22.348 12:44:42 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:22.348 12:44:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.348 12:44:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.348 12:44:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.348 12:44:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.348 12:44:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.348 12:44:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.348 12:44:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.348 12:44:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.348 12:44:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.348 12:44:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.348 12:44:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.349 12:44:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:22.349 12:44:42 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.349 12:44:42 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:22.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.349 --rc genhtml_branch_coverage=1 00:04:22.349 --rc genhtml_function_coverage=1 00:04:22.349 --rc genhtml_legend=1 00:04:22.349 --rc geninfo_all_blocks=1 00:04:22.349 --rc geninfo_unexecuted_blocks=1 00:04:22.349 00:04:22.349 ' 00:04:22.349 12:44:42 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:22.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.349 --rc genhtml_branch_coverage=1 00:04:22.349 --rc genhtml_function_coverage=1 00:04:22.349 --rc genhtml_legend=1 00:04:22.349 --rc geninfo_all_blocks=1 00:04:22.349 --rc geninfo_unexecuted_blocks=1 00:04:22.349 00:04:22.349 ' 00:04:22.349 12:44:42 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:22.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.349 --rc genhtml_branch_coverage=1 00:04:22.349 --rc genhtml_function_coverage=1 00:04:22.349 --rc genhtml_legend=1 00:04:22.349 --rc geninfo_all_blocks=1 00:04:22.349 --rc geninfo_unexecuted_blocks=1 00:04:22.349 00:04:22.349 ' 00:04:22.349 12:44:42 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:22.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.349 --rc genhtml_branch_coverage=1 00:04:22.349 --rc genhtml_function_coverage=1 00:04:22.349 --rc genhtml_legend=1 00:04:22.349 --rc geninfo_all_blocks=1 00:04:22.349 --rc geninfo_unexecuted_blocks=1 00:04:22.349 00:04:22.349 ' 00:04:22.349 12:44:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:22.349 12:44:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:22.349 12:44:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:22.349 12:44:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:22.349 12:44:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:22.349 12:44:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:22.349 12:44:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:22.349 12:44:42 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.349 12:44:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.349 12:44:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1028492 00:04:22.349 12:44:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1028492 00:04:22.349 12:44:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:22.349 12:44:42 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1028492 ']' 00:04:22.349 12:44:42 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.349 12:44:42 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:22.349 12:44:42 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.349 12:44:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:22.349 12:44:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.349 [2024-10-15 12:44:42.634841] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:04:22.349 [2024-10-15 12:44:42.634888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028492 ] 00:04:22.608 [2024-10-15 12:44:42.701676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:22.608 [2024-10-15 12:44:42.744871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.608 [2024-10-15 12:44:42.744874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.868 12:44:42 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:22.868 12:44:42 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:22.868 12:44:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1028537 00:04:22.868 12:44:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:22.868 12:44:42 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:22.868 [ 00:04:22.868 "bdev_malloc_delete", 00:04:22.868 "bdev_malloc_create", 00:04:22.868 "bdev_null_resize", 00:04:22.868 "bdev_null_delete", 00:04:22.868 "bdev_null_create", 00:04:22.868 "bdev_nvme_cuse_unregister", 00:04:22.868 "bdev_nvme_cuse_register", 00:04:22.868 "bdev_opal_new_user", 00:04:22.868 "bdev_opal_set_lock_state", 00:04:22.868 "bdev_opal_delete", 00:04:22.868 "bdev_opal_get_info", 00:04:22.868 "bdev_opal_create", 00:04:22.868 "bdev_nvme_opal_revert", 00:04:22.868 "bdev_nvme_opal_init", 00:04:22.868 "bdev_nvme_send_cmd", 00:04:22.868 "bdev_nvme_set_keys", 00:04:22.868 "bdev_nvme_get_path_iostat", 00:04:22.868 "bdev_nvme_get_mdns_discovery_info", 00:04:22.868 "bdev_nvme_stop_mdns_discovery", 00:04:22.868 "bdev_nvme_start_mdns_discovery", 00:04:22.868 "bdev_nvme_set_multipath_policy", 00:04:22.868 "bdev_nvme_set_preferred_path", 00:04:22.868 "bdev_nvme_get_io_paths", 00:04:22.868 "bdev_nvme_remove_error_injection", 00:04:22.868 "bdev_nvme_add_error_injection", 00:04:22.868 "bdev_nvme_get_discovery_info", 00:04:22.868 "bdev_nvme_stop_discovery", 00:04:22.868 "bdev_nvme_start_discovery", 00:04:22.868 "bdev_nvme_get_controller_health_info", 00:04:22.868 "bdev_nvme_disable_controller", 00:04:22.868 "bdev_nvme_enable_controller", 00:04:22.868 "bdev_nvme_reset_controller", 00:04:22.868 "bdev_nvme_get_transport_statistics", 00:04:22.868 "bdev_nvme_apply_firmware", 00:04:22.868 "bdev_nvme_detach_controller", 00:04:22.868 "bdev_nvme_get_controllers", 00:04:22.868 "bdev_nvme_attach_controller", 00:04:22.868 "bdev_nvme_set_hotplug", 00:04:22.868 "bdev_nvme_set_options", 00:04:22.868 "bdev_passthru_delete", 00:04:22.868 "bdev_passthru_create", 00:04:22.868 "bdev_lvol_set_parent_bdev", 00:04:22.868 "bdev_lvol_set_parent", 00:04:22.868 "bdev_lvol_check_shallow_copy", 00:04:22.868 "bdev_lvol_start_shallow_copy", 00:04:22.868 "bdev_lvol_grow_lvstore", 00:04:22.868 
"bdev_lvol_get_lvols", 00:04:22.868 "bdev_lvol_get_lvstores", 00:04:22.868 "bdev_lvol_delete", 00:04:22.868 "bdev_lvol_set_read_only", 00:04:22.868 "bdev_lvol_resize", 00:04:22.868 "bdev_lvol_decouple_parent", 00:04:22.868 "bdev_lvol_inflate", 00:04:22.868 "bdev_lvol_rename", 00:04:22.868 "bdev_lvol_clone_bdev", 00:04:22.868 "bdev_lvol_clone", 00:04:22.868 "bdev_lvol_snapshot", 00:04:22.868 "bdev_lvol_create", 00:04:22.868 "bdev_lvol_delete_lvstore", 00:04:22.868 "bdev_lvol_rename_lvstore", 00:04:22.868 "bdev_lvol_create_lvstore", 00:04:22.868 "bdev_raid_set_options", 00:04:22.868 "bdev_raid_remove_base_bdev", 00:04:22.868 "bdev_raid_add_base_bdev", 00:04:22.868 "bdev_raid_delete", 00:04:22.868 "bdev_raid_create", 00:04:22.868 "bdev_raid_get_bdevs", 00:04:22.868 "bdev_error_inject_error", 00:04:22.868 "bdev_error_delete", 00:04:22.868 "bdev_error_create", 00:04:22.868 "bdev_split_delete", 00:04:22.868 "bdev_split_create", 00:04:22.868 "bdev_delay_delete", 00:04:22.868 "bdev_delay_create", 00:04:22.868 "bdev_delay_update_latency", 00:04:22.868 "bdev_zone_block_delete", 00:04:22.868 "bdev_zone_block_create", 00:04:22.868 "blobfs_create", 00:04:22.868 "blobfs_detect", 00:04:22.868 "blobfs_set_cache_size", 00:04:22.868 "bdev_aio_delete", 00:04:22.868 "bdev_aio_rescan", 00:04:22.868 "bdev_aio_create", 00:04:22.868 "bdev_ftl_set_property", 00:04:22.868 "bdev_ftl_get_properties", 00:04:22.868 "bdev_ftl_get_stats", 00:04:22.868 "bdev_ftl_unmap", 00:04:22.868 "bdev_ftl_unload", 00:04:22.868 "bdev_ftl_delete", 00:04:22.868 "bdev_ftl_load", 00:04:22.868 "bdev_ftl_create", 00:04:22.868 "bdev_virtio_attach_controller", 00:04:22.868 "bdev_virtio_scsi_get_devices", 00:04:22.868 "bdev_virtio_detach_controller", 00:04:22.868 "bdev_virtio_blk_set_hotplug", 00:04:22.868 "bdev_iscsi_delete", 00:04:22.868 "bdev_iscsi_create", 00:04:22.868 "bdev_iscsi_set_options", 00:04:22.868 "accel_error_inject_error", 00:04:22.868 "ioat_scan_accel_module", 00:04:22.868 "dsa_scan_accel_module", 
00:04:22.868 "iaa_scan_accel_module", 00:04:22.868 "vfu_virtio_create_fs_endpoint", 00:04:22.868 "vfu_virtio_create_scsi_endpoint", 00:04:22.868 "vfu_virtio_scsi_remove_target", 00:04:22.868 "vfu_virtio_scsi_add_target", 00:04:22.868 "vfu_virtio_create_blk_endpoint", 00:04:22.868 "vfu_virtio_delete_endpoint", 00:04:22.868 "keyring_file_remove_key", 00:04:22.868 "keyring_file_add_key", 00:04:22.868 "keyring_linux_set_options", 00:04:22.868 "fsdev_aio_delete", 00:04:22.868 "fsdev_aio_create", 00:04:22.868 "iscsi_get_histogram", 00:04:22.868 "iscsi_enable_histogram", 00:04:22.868 "iscsi_set_options", 00:04:22.868 "iscsi_get_auth_groups", 00:04:22.868 "iscsi_auth_group_remove_secret", 00:04:22.868 "iscsi_auth_group_add_secret", 00:04:22.868 "iscsi_delete_auth_group", 00:04:22.868 "iscsi_create_auth_group", 00:04:22.868 "iscsi_set_discovery_auth", 00:04:22.868 "iscsi_get_options", 00:04:22.868 "iscsi_target_node_request_logout", 00:04:22.868 "iscsi_target_node_set_redirect", 00:04:22.868 "iscsi_target_node_set_auth", 00:04:22.868 "iscsi_target_node_add_lun", 00:04:22.868 "iscsi_get_stats", 00:04:22.868 "iscsi_get_connections", 00:04:22.868 "iscsi_portal_group_set_auth", 00:04:22.868 "iscsi_start_portal_group", 00:04:22.868 "iscsi_delete_portal_group", 00:04:22.868 "iscsi_create_portal_group", 00:04:22.868 "iscsi_get_portal_groups", 00:04:22.868 "iscsi_delete_target_node", 00:04:22.868 "iscsi_target_node_remove_pg_ig_maps", 00:04:22.868 "iscsi_target_node_add_pg_ig_maps", 00:04:22.868 "iscsi_create_target_node", 00:04:22.868 "iscsi_get_target_nodes", 00:04:22.868 "iscsi_delete_initiator_group", 00:04:22.868 "iscsi_initiator_group_remove_initiators", 00:04:22.868 "iscsi_initiator_group_add_initiators", 00:04:22.868 "iscsi_create_initiator_group", 00:04:22.868 "iscsi_get_initiator_groups", 00:04:22.868 "nvmf_set_crdt", 00:04:22.868 "nvmf_set_config", 00:04:22.868 "nvmf_set_max_subsystems", 00:04:22.868 "nvmf_stop_mdns_prr", 00:04:22.868 "nvmf_publish_mdns_prr", 
00:04:22.868 "nvmf_subsystem_get_listeners", 00:04:22.868 "nvmf_subsystem_get_qpairs", 00:04:22.868 "nvmf_subsystem_get_controllers", 00:04:22.868 "nvmf_get_stats", 00:04:22.868 "nvmf_get_transports", 00:04:22.868 "nvmf_create_transport", 00:04:22.868 "nvmf_get_targets", 00:04:22.868 "nvmf_delete_target", 00:04:22.868 "nvmf_create_target", 00:04:22.868 "nvmf_subsystem_allow_any_host", 00:04:22.868 "nvmf_subsystem_set_keys", 00:04:22.868 "nvmf_subsystem_remove_host", 00:04:22.868 "nvmf_subsystem_add_host", 00:04:22.868 "nvmf_ns_remove_host", 00:04:22.868 "nvmf_ns_add_host", 00:04:22.868 "nvmf_subsystem_remove_ns", 00:04:22.868 "nvmf_subsystem_set_ns_ana_group", 00:04:22.868 "nvmf_subsystem_add_ns", 00:04:22.868 "nvmf_subsystem_listener_set_ana_state", 00:04:22.868 "nvmf_discovery_get_referrals", 00:04:22.868 "nvmf_discovery_remove_referral", 00:04:22.868 "nvmf_discovery_add_referral", 00:04:22.868 "nvmf_subsystem_remove_listener", 00:04:22.868 "nvmf_subsystem_add_listener", 00:04:22.868 "nvmf_delete_subsystem", 00:04:22.868 "nvmf_create_subsystem", 00:04:22.868 "nvmf_get_subsystems", 00:04:22.868 "env_dpdk_get_mem_stats", 00:04:22.868 "nbd_get_disks", 00:04:22.869 "nbd_stop_disk", 00:04:22.869 "nbd_start_disk", 00:04:22.869 "ublk_recover_disk", 00:04:22.869 "ublk_get_disks", 00:04:22.869 "ublk_stop_disk", 00:04:22.869 "ublk_start_disk", 00:04:22.869 "ublk_destroy_target", 00:04:22.869 "ublk_create_target", 00:04:22.869 "virtio_blk_create_transport", 00:04:22.869 "virtio_blk_get_transports", 00:04:22.869 "vhost_controller_set_coalescing", 00:04:22.869 "vhost_get_controllers", 00:04:22.869 "vhost_delete_controller", 00:04:22.869 "vhost_create_blk_controller", 00:04:22.869 "vhost_scsi_controller_remove_target", 00:04:22.869 "vhost_scsi_controller_add_target", 00:04:22.869 "vhost_start_scsi_controller", 00:04:22.869 "vhost_create_scsi_controller", 00:04:22.869 "thread_set_cpumask", 00:04:22.869 "scheduler_set_options", 00:04:22.869 "framework_get_governor", 00:04:22.869 
"framework_get_scheduler", 00:04:22.869 "framework_set_scheduler", 00:04:22.869 "framework_get_reactors", 00:04:22.869 "thread_get_io_channels", 00:04:22.869 "thread_get_pollers", 00:04:22.869 "thread_get_stats", 00:04:22.869 "framework_monitor_context_switch", 00:04:22.869 "spdk_kill_instance", 00:04:22.869 "log_enable_timestamps", 00:04:22.869 "log_get_flags", 00:04:22.869 "log_clear_flag", 00:04:22.869 "log_set_flag", 00:04:22.869 "log_get_level", 00:04:22.869 "log_set_level", 00:04:22.869 "log_get_print_level", 00:04:22.869 "log_set_print_level", 00:04:22.869 "framework_enable_cpumask_locks", 00:04:22.869 "framework_disable_cpumask_locks", 00:04:22.869 "framework_wait_init", 00:04:22.869 "framework_start_init", 00:04:22.869 "scsi_get_devices", 00:04:22.869 "bdev_get_histogram", 00:04:22.869 "bdev_enable_histogram", 00:04:22.869 "bdev_set_qos_limit", 00:04:22.869 "bdev_set_qd_sampling_period", 00:04:22.869 "bdev_get_bdevs", 00:04:22.869 "bdev_reset_iostat", 00:04:22.869 "bdev_get_iostat", 00:04:22.869 "bdev_examine", 00:04:22.869 "bdev_wait_for_examine", 00:04:22.869 "bdev_set_options", 00:04:22.869 "accel_get_stats", 00:04:22.869 "accel_set_options", 00:04:22.869 "accel_set_driver", 00:04:22.869 "accel_crypto_key_destroy", 00:04:22.869 "accel_crypto_keys_get", 00:04:22.869 "accel_crypto_key_create", 00:04:22.869 "accel_assign_opc", 00:04:22.869 "accel_get_module_info", 00:04:22.869 "accel_get_opc_assignments", 00:04:22.869 "vmd_rescan", 00:04:22.869 "vmd_remove_device", 00:04:22.869 "vmd_enable", 00:04:22.869 "sock_get_default_impl", 00:04:22.869 "sock_set_default_impl", 00:04:22.869 "sock_impl_set_options", 00:04:22.869 "sock_impl_get_options", 00:04:22.869 "iobuf_get_stats", 00:04:22.869 "iobuf_set_options", 00:04:22.869 "keyring_get_keys", 00:04:22.869 "vfu_tgt_set_base_path", 00:04:22.869 "framework_get_pci_devices", 00:04:22.869 "framework_get_config", 00:04:22.869 "framework_get_subsystems", 00:04:22.869 "fsdev_set_opts", 00:04:22.869 "fsdev_get_opts", 
00:04:22.869 "trace_get_info", 00:04:22.869 "trace_get_tpoint_group_mask", 00:04:22.869 "trace_disable_tpoint_group", 00:04:22.869 "trace_enable_tpoint_group", 00:04:22.869 "trace_clear_tpoint_mask", 00:04:22.869 "trace_set_tpoint_mask", 00:04:22.869 "notify_get_notifications", 00:04:22.869 "notify_get_types", 00:04:22.869 "spdk_get_version", 00:04:22.869 "rpc_get_methods" 00:04:22.869 ] 00:04:22.869 12:44:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:22.869 12:44:43 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.869 12:44:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.128 12:44:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:23.128 12:44:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1028492 00:04:23.128 12:44:43 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1028492 ']' 00:04:23.128 12:44:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1028492 00:04:23.128 12:44:43 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:23.128 12:44:43 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:23.128 12:44:43 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1028492 00:04:23.128 12:44:43 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:23.128 12:44:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:23.128 12:44:43 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1028492' 00:04:23.128 killing process with pid 1028492 00:04:23.128 12:44:43 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1028492 00:04:23.128 12:44:43 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1028492 00:04:23.387 00:04:23.387 real 0m1.148s 00:04:23.387 user 0m1.957s 00:04:23.387 sys 0m0.440s 00:04:23.387 12:44:43 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.387 12:44:43 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.387 ************************************ 00:04:23.387 END TEST spdkcli_tcp 00:04:23.387 ************************************ 00:04:23.387 12:44:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:23.387 12:44:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.387 12:44:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.387 12:44:43 -- common/autotest_common.sh@10 -- # set +x 00:04:23.387 ************************************ 00:04:23.387 START TEST dpdk_mem_utility 00:04:23.387 ************************************ 00:04:23.387 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:23.387 * Looking for test storage... 00:04:23.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:23.387 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.647 12:44:43 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:04:23.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.647 --rc genhtml_branch_coverage=1 00:04:23.647 --rc genhtml_function_coverage=1 00:04:23.647 --rc genhtml_legend=1 00:04:23.647 --rc geninfo_all_blocks=1 00:04:23.647 --rc geninfo_unexecuted_blocks=1 00:04:23.647 00:04:23.647 ' 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:23.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.647 --rc genhtml_branch_coverage=1 00:04:23.647 --rc genhtml_function_coverage=1 00:04:23.647 --rc genhtml_legend=1 00:04:23.647 --rc geninfo_all_blocks=1 00:04:23.647 --rc geninfo_unexecuted_blocks=1 00:04:23.647 00:04:23.647 ' 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:23.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.647 --rc genhtml_branch_coverage=1 00:04:23.647 --rc genhtml_function_coverage=1 00:04:23.647 --rc genhtml_legend=1 00:04:23.647 --rc geninfo_all_blocks=1 00:04:23.647 --rc geninfo_unexecuted_blocks=1 00:04:23.647 00:04:23.647 ' 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:23.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.647 --rc genhtml_branch_coverage=1 00:04:23.647 --rc genhtml_function_coverage=1 00:04:23.647 --rc genhtml_legend=1 00:04:23.647 --rc geninfo_all_blocks=1 00:04:23.647 --rc geninfo_unexecuted_blocks=1 00:04:23.647 00:04:23.647 ' 00:04:23.647 12:44:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:23.647 12:44:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1028797 00:04:23.647 12:44:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1028797 00:04:23.647 12:44:43 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1028797 ']' 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:23.647 12:44:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:23.647 [2024-10-15 12:44:43.847061] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:04:23.647 [2024-10-15 12:44:43.847110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028797 ] 00:04:23.647 [2024-10-15 12:44:43.914903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.647 [2024-10-15 12:44:43.954163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.907 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.907 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:23.907 12:44:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:23.907 12:44:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:23.907 12:44:44 dpdk_mem_utility -- 
common/autotest_common.sh@561 -- # xtrace_disable
00:04:23.907 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:23.907 {
00:04:23.907 "filename": "/tmp/spdk_mem_dump.txt"
00:04:23.907 }
00:04:23.907 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:23.907 12:44:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:24.167 DPDK memory size 810.000000 MiB in 1 heap(s)
00:04:24.167 1 heaps totaling size 810.000000 MiB
00:04:24.167 size: 810.000000 MiB heap id: 0
00:04:24.167 end heaps----------
00:04:24.167 9 mempools totaling size 595.772034 MiB
00:04:24.167 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:24.167 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:24.167 size: 92.545471 MiB name: bdev_io_1028797
00:04:24.167 size: 50.003479 MiB name: msgpool_1028797
00:04:24.167 size: 36.509338 MiB name: fsdev_io_1028797
00:04:24.167 size: 21.763794 MiB name: PDU_Pool
00:04:24.167 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:24.167 size: 4.133484 MiB name: evtpool_1028797
00:04:24.167 size: 0.026123 MiB name: Session_Pool
00:04:24.167 end mempools-------
00:04:24.167 6 memzones totaling size 4.142822 MiB
00:04:24.167 size: 1.000366 MiB name: RG_ring_0_1028797
00:04:24.167 size: 1.000366 MiB name: RG_ring_1_1028797
00:04:24.167 size: 1.000366 MiB name: RG_ring_4_1028797
00:04:24.167 size: 1.000366 MiB name: RG_ring_5_1028797
00:04:24.167 size: 0.125366 MiB name: RG_ring_2_1028797
00:04:24.167 size: 0.015991 MiB name: RG_ring_3_1028797
00:04:24.167 end memzones-------
00:04:24.167 12:44:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:04:24.167 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15
00:04:24.167 list of free elements.
size: 10.862488 MiB 00:04:24.167 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:24.167 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:24.167 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:24.167 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:24.167 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:24.167 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:24.167 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:24.167 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:24.167 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:24.167 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:24.167 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:24.167 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:24.167 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:24.167 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:24.167 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:24.167 list of standard malloc elements. 
size: 199.218628 MiB 00:04:24.167 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:24.167 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:24.167 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:24.167 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:24.167 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:24.167 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:24.167 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:24.167 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:24.167 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:24.167 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:24.167 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:24.167 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:24.167 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:24.167 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:24.167 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:24.167 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:24.167 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:24.167 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:24.167 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:24.167 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:24.167 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:24.167 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:24.167 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:24.167 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:24.167 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:24.167 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:24.167 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:24.167 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:24.167 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:24.167 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:24.167 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:24.168 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:24.168 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:24.168 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:24.168 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:24.168 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:24.168 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:24.168 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:24.168 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:24.168 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:24.168 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:24.168 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:24.168 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:24.168 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:24.168 list of memzone associated elements. 
size: 599.918884 MiB 00:04:24.168 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:24.168 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:24.168 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:24.168 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:24.168 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:24.168 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1028797_0 00:04:24.168 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:24.168 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1028797_0 00:04:24.168 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:24.168 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1028797_0 00:04:24.168 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:24.168 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:24.168 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:24.168 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:24.168 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:24.168 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1028797_0 00:04:24.168 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:24.168 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1028797 00:04:24.168 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:24.168 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1028797 00:04:24.168 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:24.168 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:24.168 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:24.168 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:24.168 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:24.168 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:24.168 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:24.168 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:24.168 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:24.168 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1028797 00:04:24.168 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:24.168 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1028797 00:04:24.168 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:24.168 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1028797 00:04:24.168 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:24.168 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1028797 00:04:24.168 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:24.168 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1028797 00:04:24.168 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:24.168 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1028797 00:04:24.168 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:24.168 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:24.168 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:24.168 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:24.168 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:24.168 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:24.168 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:24.168 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1028797 00:04:24.168 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:24.168 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1028797 00:04:24.168 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:24.168 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:24.168 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:24.168 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:24.168 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:24.168 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1028797 00:04:24.168 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:24.168 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:24.168 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:24.168 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1028797 00:04:24.168 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:24.168 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1028797 00:04:24.168 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:24.168 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1028797 00:04:24.168 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:24.168 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:24.168 12:44:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:24.168 12:44:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1028797 00:04:24.168 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1028797 ']' 00:04:24.168 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1028797 00:04:24.168 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:24.168 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:24.168 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1028797 00:04:24.168 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:24.168 12:44:44 
dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:24.168 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1028797' 00:04:24.168 killing process with pid 1028797 00:04:24.168 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1028797 00:04:24.168 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1028797 00:04:24.428 00:04:24.428 real 0m1.013s 00:04:24.428 user 0m0.967s 00:04:24.428 sys 0m0.392s 00:04:24.428 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.428 12:44:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.428 ************************************ 00:04:24.428 END TEST dpdk_mem_utility 00:04:24.428 ************************************ 00:04:24.428 12:44:44 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:24.428 12:44:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.428 12:44:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.428 12:44:44 -- common/autotest_common.sh@10 -- # set +x 00:04:24.428 ************************************ 00:04:24.428 START TEST event 00:04:24.428 ************************************ 00:04:24.428 12:44:44 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:24.688 * Looking for test storage... 
00:04:24.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:24.688 12:44:44 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:24.688 12:44:44 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:24.688 12:44:44 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:24.688 12:44:44 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:24.688 12:44:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.688 12:44:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.688 12:44:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.688 12:44:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.688 12:44:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.688 12:44:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.688 12:44:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.688 12:44:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.688 12:44:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.688 12:44:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.688 12:44:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.688 12:44:44 event -- scripts/common.sh@344 -- # case "$op" in 00:04:24.688 12:44:44 event -- scripts/common.sh@345 -- # : 1 00:04:24.688 12:44:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.688 12:44:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.688 12:44:44 event -- scripts/common.sh@365 -- # decimal 1 00:04:24.688 12:44:44 event -- scripts/common.sh@353 -- # local d=1 00:04:24.688 12:44:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.688 12:44:44 event -- scripts/common.sh@355 -- # echo 1 00:04:24.688 12:44:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.688 12:44:44 event -- scripts/common.sh@366 -- # decimal 2 00:04:24.688 12:44:44 event -- scripts/common.sh@353 -- # local d=2 00:04:24.688 12:44:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.688 12:44:44 event -- scripts/common.sh@355 -- # echo 2 00:04:24.688 12:44:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.688 12:44:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.688 12:44:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.688 12:44:44 event -- scripts/common.sh@368 -- # return 0 00:04:24.688 12:44:44 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.688 12:44:44 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:24.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.688 --rc genhtml_branch_coverage=1 00:04:24.688 --rc genhtml_function_coverage=1 00:04:24.688 --rc genhtml_legend=1 00:04:24.688 --rc geninfo_all_blocks=1 00:04:24.688 --rc geninfo_unexecuted_blocks=1 00:04:24.688 00:04:24.688 ' 00:04:24.688 12:44:44 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:24.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.688 --rc genhtml_branch_coverage=1 00:04:24.688 --rc genhtml_function_coverage=1 00:04:24.688 --rc genhtml_legend=1 00:04:24.688 --rc geninfo_all_blocks=1 00:04:24.688 --rc geninfo_unexecuted_blocks=1 00:04:24.688 00:04:24.688 ' 00:04:24.688 12:44:44 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:24.688 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:24.688 --rc genhtml_branch_coverage=1 00:04:24.688 --rc genhtml_function_coverage=1 00:04:24.688 --rc genhtml_legend=1 00:04:24.688 --rc geninfo_all_blocks=1 00:04:24.688 --rc geninfo_unexecuted_blocks=1 00:04:24.688 00:04:24.688 ' 00:04:24.688 12:44:44 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:24.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.688 --rc genhtml_branch_coverage=1 00:04:24.688 --rc genhtml_function_coverage=1 00:04:24.688 --rc genhtml_legend=1 00:04:24.688 --rc geninfo_all_blocks=1 00:04:24.688 --rc geninfo_unexecuted_blocks=1 00:04:24.688 00:04:24.688 ' 00:04:24.689 12:44:44 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:24.689 12:44:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:24.689 12:44:44 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:24.689 12:44:44 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:24.689 12:44:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.689 12:44:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.689 ************************************ 00:04:24.689 START TEST event_perf 00:04:24.689 ************************************ 00:04:24.689 12:44:44 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:24.689 Running I/O for 1 seconds...[2024-10-15 12:44:44.925031] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:04:24.689 [2024-10-15 12:44:44.925098] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029089 ]
00:04:24.689 [2024-10-15 12:44:44.993765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:24.948 [2024-10-15 12:44:45.037809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:24.948 [2024-10-15 12:44:45.037920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:24.948 [2024-10-15 12:44:45.038024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:24.948 [2024-10-15 12:44:45.038025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:25.884 Running I/O for 1 seconds...
00:04:25.884 lcore 0: 202466
00:04:25.884 lcore 1: 202465
00:04:25.884 lcore 2: 202465
00:04:25.884 lcore 3: 202466
00:04:25.884 done.
00:04:25.884 00:04:25.884 real 0m1.169s 00:04:25.885 user 0m4.089s 00:04:25.885 sys 0m0.076s 00:04:25.885 12:44:46 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.885 12:44:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:25.885 ************************************ 00:04:25.885 END TEST event_perf 00:04:25.885 ************************************ 00:04:25.885 12:44:46 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:25.885 12:44:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:25.885 12:44:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.885 12:44:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.885 ************************************ 00:04:25.885 START TEST event_reactor 00:04:25.885 ************************************ 00:04:25.885 12:44:46 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:25.885 [2024-10-15 12:44:46.172054] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:04:25.885 [2024-10-15 12:44:46.172123] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029345 ]
00:04:26.144 [2024-10-15 12:44:46.245237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:26.144 [2024-10-15 12:44:46.286142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:27.083 test_start
00:04:27.083 oneshot
00:04:27.083 tick 100
00:04:27.083 tick 100
00:04:27.083 tick 250
00:04:27.083 tick 100
00:04:27.083 tick 100
00:04:27.083 tick 250
00:04:27.083 tick 100
00:04:27.083 tick 500
00:04:27.083 tick 100
00:04:27.083 tick 100
00:04:27.083 tick 250
00:04:27.083 tick 100
00:04:27.083 tick 100
00:04:27.083 test_end
00:04:27.083
00:04:27.083 real 0m1.176s
00:04:27.083 user 0m1.093s
00:04:27.083 sys 0m0.080s
00:04:27.083 12:44:47 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:27.083 12:44:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:27.083 ************************************
00:04:27.083 END TEST event_reactor
00:04:27.083 ************************************
00:04:27.083 12:44:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:27.083 12:44:47 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:04:27.083 12:44:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:27.083 12:44:47 event -- common/autotest_common.sh@10 -- # set +x
00:04:27.083 ************************************
00:04:27.083 START TEST event_reactor_perf
00:04:27.083 ************************************
00:04:27.083 12:44:47 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf
-t 1 00:04:27.343 [2024-10-15 12:44:47.416474] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:04:27.343 [2024-10-15 12:44:47.416538] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029591 ] 00:04:27.343 [2024-10-15 12:44:47.490977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.343 [2024-10-15 12:44:47.531414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.280 test_start 00:04:28.280 test_end 00:04:28.280 Performance: 515277 events per second 00:04:28.280 00:04:28.280 real 0m1.176s 00:04:28.280 user 0m1.102s 00:04:28.280 sys 0m0.070s 00:04:28.280 12:44:48 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.280 12:44:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:28.280 ************************************ 00:04:28.280 END TEST event_reactor_perf 00:04:28.280 ************************************ 00:04:28.541 12:44:48 event -- event/event.sh@49 -- # uname -s 00:04:28.541 12:44:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:28.541 12:44:48 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:28.541 12:44:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.541 12:44:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.541 12:44:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.541 ************************************ 00:04:28.541 START TEST event_scheduler 00:04:28.541 ************************************ 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:28.541 * Looking for test storage... 00:04:28.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.541 12:44:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:28.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.541 --rc genhtml_branch_coverage=1 00:04:28.541 --rc genhtml_function_coverage=1 00:04:28.541 --rc genhtml_legend=1 00:04:28.541 --rc geninfo_all_blocks=1 00:04:28.541 --rc geninfo_unexecuted_blocks=1 00:04:28.541 00:04:28.541 ' 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:28.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.541 --rc genhtml_branch_coverage=1 00:04:28.541 --rc genhtml_function_coverage=1 00:04:28.541 --rc 
genhtml_legend=1 00:04:28.541 --rc geninfo_all_blocks=1 00:04:28.541 --rc geninfo_unexecuted_blocks=1 00:04:28.541 00:04:28.541 ' 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:28.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.541 --rc genhtml_branch_coverage=1 00:04:28.541 --rc genhtml_function_coverage=1 00:04:28.541 --rc genhtml_legend=1 00:04:28.541 --rc geninfo_all_blocks=1 00:04:28.541 --rc geninfo_unexecuted_blocks=1 00:04:28.541 00:04:28.541 ' 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:28.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.541 --rc genhtml_branch_coverage=1 00:04:28.541 --rc genhtml_function_coverage=1 00:04:28.541 --rc genhtml_legend=1 00:04:28.541 --rc geninfo_all_blocks=1 00:04:28.541 --rc geninfo_unexecuted_blocks=1 00:04:28.541 00:04:28.541 ' 00:04:28.541 12:44:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:28.541 12:44:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1029873 00:04:28.541 12:44:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.541 12:44:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:28.541 12:44:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1029873 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1029873 ']' 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:28.541 12:44:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:28.801 [2024-10-15 12:44:48.872276] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:04:28.801 [2024-10-15 12:44:48.872318] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029873 ] 00:04:28.801 [2024-10-15 12:44:48.939139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:28.801 [2024-10-15 12:44:48.983201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.801 [2024-10-15 12:44:48.983308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.801 [2024-10-15 12:44:48.983415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:28.801 [2024-10-15 12:44:48.983416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:28.801 12:44:49 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.801 12:44:49 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:28.801 12:44:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:28.802 12:44:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.802 12:44:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:28.802 [2024-10-15 12:44:49.039947] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:28.802 [2024-10-15 12:44:49.039968] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:28.802 [2024-10-15 12:44:49.039978] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:28.802 [2024-10-15 12:44:49.039983] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:28.802 [2024-10-15 12:44:49.039989] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:28.802 12:44:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.802 12:44:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:28.802 12:44:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.802 12:44:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:28.802 [2024-10-15 12:44:49.112749] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:28.802 12:44:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.802 12:44:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:28.802 12:44:49 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.802 12:44:49 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.802 12:44:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.062 ************************************ 00:04:29.062 START TEST scheduler_create_thread 00:04:29.062 ************************************ 00:04:29.062 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:29.062 12:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:29.062 12:44:49 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.062 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.062 2 00:04:29.062 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.062 12:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:29.062 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.062 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.062 3 00:04:29.062 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.062 12:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.063 4 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.063 5 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.063 12:44:49 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.063 6 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.063 7 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.063 8 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.063 12:44:49 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.063 9 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.063 10 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.063 12:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.001 12:44:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.001 12:44:50 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:30.001 12:44:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.001 12:44:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.375 12:44:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.375 12:44:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:31.375 12:44:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:31.375 12:44:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.375 12:44:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.311 12:44:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.311 00:04:32.311 real 0m3.382s 00:04:32.311 user 0m0.023s 00:04:32.311 sys 0m0.007s 00:04:32.311 12:44:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.311 12:44:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.311 ************************************ 00:04:32.311 END TEST scheduler_create_thread 00:04:32.311 ************************************ 00:04:32.311 12:44:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:32.311 12:44:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1029873 00:04:32.311 12:44:52 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1029873 ']' 00:04:32.311 12:44:52 event.event_scheduler -- common/autotest_common.sh@954 -- # 
kill -0 1029873 00:04:32.311 12:44:52 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:32.311 12:44:52 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.311 12:44:52 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1029873 00:04:32.311 12:44:52 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:32.311 12:44:52 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:32.311 12:44:52 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1029873' 00:04:32.311 killing process with pid 1029873 00:04:32.311 12:44:52 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1029873 00:04:32.311 12:44:52 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1029873 00:04:32.880 [2024-10-15 12:44:52.913056] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
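The killprocess sequence traced above (a `kill -0` liveness probe, a process-name lookup compared against `sudo`, a SIGTERM, then a wait for exit) can be sketched roughly as below. This is a simplified reconstruction, not SPDK's actual `autotest_common.sh` helper: the portable `ps -o comm= -p` form replaces the log's `ps --no-headers -o comm=`, and the final `wait` only works for children of the calling shell.

```shell
#!/usr/bin/env bash
# Rough sketch of the killprocess pattern seen in the log (hypothetical helper).
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only checks the pid exists and is signalable
    kill -0 "$pid" 2>/dev/null || return 0
    local name
    name=$(ps -o comm= -p "$pid" 2>/dev/null || true)
    # mirror the log's "reactor_2 = sudo" guard: never SIGTERM a sudo wrapper
    [ "$name" = "sudo" ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # reap the child; valid only for processes spawned by this shell
    wait "$pid" 2>/dev/null || true
}

sleep 60 &
killprocess "$!"
```

After the call returns, the target pid no longer answers `kill -0`, which is the same condition the test harness relies on before tearing down the workspace.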
00:04:32.880 00:04:32.880 real 0m4.471s 00:04:32.880 user 0m7.845s 00:04:32.880 sys 0m0.384s 00:04:32.880 12:44:53 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.880 12:44:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:32.880 ************************************ 00:04:32.880 END TEST event_scheduler 00:04:32.880 ************************************ 00:04:32.880 12:44:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:32.880 12:44:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:32.880 12:44:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.880 12:44:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.880 12:44:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.880 ************************************ 00:04:32.880 START TEST app_repeat 00:04:32.880 ************************************ 00:04:32.880 12:44:53 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:32.880 12:44:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.880 12:44:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.880 12:44:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:32.880 12:44:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.880 12:44:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:32.880 12:44:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:32.880 12:44:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:33.138 12:44:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1030616 00:04:33.138 12:44:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.138 12:44:53 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:33.138 12:44:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1030616' 00:04:33.138 Process app_repeat pid: 1030616 00:04:33.138 12:44:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:33.139 12:44:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:33.139 spdk_app_start Round 0 00:04:33.139 12:44:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1030616 /var/tmp/spdk-nbd.sock 00:04:33.139 12:44:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1030616 ']' 00:04:33.139 12:44:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:33.139 12:44:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:33.139 12:44:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:33.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:33.139 12:44:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:33.139 12:44:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:33.139 [2024-10-15 12:44:53.234153] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:04:33.139 [2024-10-15 12:44:53.234207] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030616 ] 00:04:33.139 [2024-10-15 12:44:53.305116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.139 [2024-10-15 12:44:53.346653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.139 [2024-10-15 12:44:53.346653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.139 12:44:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.139 12:44:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:33.139 12:44:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.398 Malloc0 00:04:33.398 12:44:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.656 Malloc1 00:04:33.656 12:44:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.656 
12:44:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.656 12:44:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:33.915 /dev/nbd0 00:04:33.915 12:44:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:33.915 12:44:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:33.915 1+0 records in 00:04:33.915 1+0 records out 00:04:33.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196645 s, 20.8 MB/s 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:33.915 12:44:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:33.915 12:44:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.915 12:44:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.915 12:44:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:34.173 /dev/nbd1 00:04:34.173 12:44:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:34.173 12:44:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:34.174 12:44:54 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.174 1+0 records in 00:04:34.174 1+0 records out 00:04:34.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198486 s, 20.6 MB/s 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:34.174 12:44:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:34.174 12:44:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.174 12:44:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.174 12:44:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.174 12:44:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.174 12:44:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:34.432 { 00:04:34.432 "nbd_device": "/dev/nbd0", 00:04:34.432 "bdev_name": "Malloc0" 00:04:34.432 }, 00:04:34.432 { 00:04:34.432 "nbd_device": "/dev/nbd1", 00:04:34.432 "bdev_name": "Malloc1" 00:04:34.432 } 00:04:34.432 ]' 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.432 { 00:04:34.432 "nbd_device": "/dev/nbd0", 00:04:34.432 "bdev_name": "Malloc0" 00:04:34.432 
}, 00:04:34.432 { 00:04:34.432 "nbd_device": "/dev/nbd1", 00:04:34.432 "bdev_name": "Malloc1" 00:04:34.432 } 00:04:34.432 ]' 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.432 /dev/nbd1' 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.432 /dev/nbd1' 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:34.432 256+0 records in 00:04:34.432 256+0 records out 00:04:34.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108174 s, 96.9 MB/s 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:34.432 256+0 records in 00:04:34.432 256+0 records out 00:04:34.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135131 s, 77.6 MB/s 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:34.432 256+0 records in 00:04:34.432 256+0 records out 00:04:34.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146384 s, 71.6 MB/s 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:34.432 12:44:54 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.432 12:44:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.433 12:44:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:34.433 12:44:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:34.433 12:44:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.433 12:44:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:34.691 12:44:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:34.691 12:44:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:34.691 12:44:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:34.691 12:44:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.691 12:44:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.691 12:44:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:34.691 12:44:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.691 12:44:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.691 12:44:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.691 12:44:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:34.950 12:44:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:34.950 12:44:55 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:34.950 12:44:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:34.950 12:44:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.950 12:44:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.950 12:44:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:34.950 12:44:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.950 12:44:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.950 12:44:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.950 12:44:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.950 12:44:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.209 12:44:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:35.209 12:44:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:35.209 12:44:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.209 12:44:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:35.209 12:44:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:35.209 12:44:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.209 12:44:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:35.209 12:44:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:35.209 12:44:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:35.209 12:44:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:35.209 12:44:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:35.209 12:44:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:35.209 12:44:55 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:35.470 12:44:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:35.470 [2024-10-15 12:44:55.692123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.470 [2024-10-15 12:44:55.728323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.470 [2024-10-15 12:44:55.728323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.470 [2024-10-15 12:44:55.768592] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:35.470 [2024-10-15 12:44:55.768631] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:38.788 12:44:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:38.788 12:44:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:38.788 spdk_app_start Round 1 00:04:38.788 12:44:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1030616 /var/tmp/spdk-nbd.sock 00:04:38.788 12:44:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1030616 ']' 00:04:38.788 12:44:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.788 12:44:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.788 12:44:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:38.788 12:44:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.788 12:44:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.788 12:44:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.788 12:44:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:38.788 12:44:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.788 Malloc0 00:04:38.788 12:44:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.120 Malloc1 00:04:39.120 12:44:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.120 /dev/nbd0 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.120 1+0 records in 00:04:39.120 1+0 records out 00:04:39.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187527 s, 21.8 MB/s 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:39.120 12:44:59 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:39.120 12:44:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.120 12:44:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:39.430 /dev/nbd1 00:04:39.430 12:44:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:39.430 12:44:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.430 1+0 records in 00:04:39.430 1+0 records out 00:04:39.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188593 s, 21.7 MB/s 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:39.430 12:44:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:39.430 12:44:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.430 12:44:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.430 12:44:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.430 12:44:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.430 12:44:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:39.690 { 00:04:39.690 "nbd_device": "/dev/nbd0", 00:04:39.690 "bdev_name": "Malloc0" 00:04:39.690 }, 00:04:39.690 { 00:04:39.690 "nbd_device": "/dev/nbd1", 00:04:39.690 "bdev_name": "Malloc1" 00:04:39.690 } 00:04:39.690 ]' 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:39.690 { 00:04:39.690 "nbd_device": "/dev/nbd0", 00:04:39.690 "bdev_name": "Malloc0" 00:04:39.690 }, 00:04:39.690 { 00:04:39.690 "nbd_device": "/dev/nbd1", 00:04:39.690 "bdev_name": "Malloc1" 00:04:39.690 } 00:04:39.690 ]' 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:39.690 /dev/nbd1' 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:39.690 /dev/nbd1' 00:04:39.690 
12:44:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:39.690 256+0 records in 00:04:39.690 256+0 records out 00:04:39.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102136 s, 103 MB/s 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:39.690 256+0 records in 00:04:39.690 256+0 records out 00:04:39.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136319 s, 76.9 MB/s 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:39.690 256+0 records in 00:04:39.690 256+0 records out 00:04:39.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144385 s, 72.6 MB/s 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.690 12:44:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:39.949 12:45:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:39.949 12:45:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:39.949 12:45:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:39.949 12:45:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.949 12:45:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.949 12:45:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:39.949 12:45:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:39.949 12:45:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.949 12:45:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.949 12:45:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:40.208 12:45:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:40.208 12:45:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:40.208 12:45:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:40.208 12:45:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.208 12:45:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.208 12:45:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:40.208 12:45:00 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:40.208 12:45:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.208 12:45:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.208 12:45:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.208 12:45:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.467 12:45:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:40.467 12:45:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:40.467 12:45:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.467 12:45:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:40.467 12:45:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:40.467 12:45:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.467 12:45:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:40.467 12:45:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:40.467 12:45:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:40.467 12:45:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:40.467 12:45:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:40.467 12:45:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:40.467 12:45:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:40.726 12:45:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:40.726 [2024-10-15 12:45:01.005846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.726 [2024-10-15 12:45:01.045546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.726 [2024-10-15 12:45:01.045546] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.985 [2024-10-15 12:45:01.086808] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:40.985 [2024-10-15 12:45:01.086847] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.272 12:45:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:44.272 12:45:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:44.273 spdk_app_start Round 2 00:04:44.273 12:45:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1030616 /var/tmp/spdk-nbd.sock 00:04:44.273 12:45:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1030616 ']' 00:04:44.273 12:45:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.273 12:45:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.273 12:45:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:44.273 12:45:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.273 12:45:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.273 12:45:04 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.273 12:45:04 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:44.273 12:45:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.273 Malloc0 00:04:44.273 12:45:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.273 Malloc1 00:04:44.273 12:45:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.273 12:45:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:44.531 /dev/nbd0 00:04:44.531 12:45:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:44.531 12:45:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.531 1+0 records in 00:04:44.531 1+0 records out 00:04:44.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022894 s, 17.9 MB/s 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:44.531 12:45:04 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:44.531 12:45:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:44.532 12:45:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.532 12:45:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.532 12:45:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.790 /dev/nbd1 00:04:44.790 12:45:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.790 12:45:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.790 12:45:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:44.790 12:45:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:44.790 12:45:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:44.790 12:45:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:44.791 12:45:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:44.791 12:45:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:44.791 12:45:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:44.791 12:45:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:44.791 12:45:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.791 1+0 records in 00:04:44.791 1+0 records out 00:04:44.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237856 s, 17.2 MB/s 00:04:44.791 12:45:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.791 12:45:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:44.791 12:45:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.791 12:45:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:44.791 12:45:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:44.791 12:45:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.791 12:45:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.791 12:45:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.791 12:45:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.791 12:45:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.049 12:45:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:45.049 { 00:04:45.049 "nbd_device": "/dev/nbd0", 00:04:45.049 "bdev_name": "Malloc0" 00:04:45.049 }, 00:04:45.049 { 00:04:45.049 "nbd_device": "/dev/nbd1", 00:04:45.049 "bdev_name": "Malloc1" 00:04:45.049 } 00:04:45.049 ]' 00:04:45.049 12:45:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.049 { 00:04:45.049 "nbd_device": "/dev/nbd0", 00:04:45.049 "bdev_name": "Malloc0" 00:04:45.049 }, 00:04:45.049 { 00:04:45.049 "nbd_device": "/dev/nbd1", 00:04:45.049 "bdev_name": "Malloc1" 00:04:45.049 } 00:04:45.049 ]' 00:04:45.049 12:45:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.049 12:45:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.049 /dev/nbd1' 00:04:45.049 12:45:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.049 /dev/nbd1' 00:04:45.049 
12:45:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.050 256+0 records in 00:04:45.050 256+0 records out 00:04:45.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108435 s, 96.7 MB/s 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.050 256+0 records in 00:04:45.050 256+0 records out 00:04:45.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137228 s, 76.4 MB/s 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.050 256+0 records in 00:04:45.050 256+0 records out 00:04:45.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146401 s, 71.6 MB/s 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.050 12:45:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:45.309 12:45:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:45.309 12:45:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:45.309 12:45:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:45.309 12:45:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.309 12:45:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.309 12:45:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:45.309 12:45:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.309 12:45:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.309 12:45:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.309 12:45:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:45.568 12:45:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:45.568 12:45:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:45.568 12:45:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:45.568 12:45:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.568 12:45:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.568 12:45:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:45.568 12:45:05 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:45.568 12:45:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.568 12:45:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.568 12:45:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.568 12:45:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.827 12:45:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:45.827 12:45:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:45.827 12:45:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.827 12:45:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:45.827 12:45:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:45.827 12:45:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.827 12:45:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:45.827 12:45:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:45.827 12:45:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:45.827 12:45:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:45.827 12:45:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:45.827 12:45:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:45.827 12:45:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.086 12:45:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:46.086 [2024-10-15 12:45:06.306301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.086 [2024-10-15 12:45:06.343280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.086 [2024-10-15 12:45:06.343280] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.086 [2024-10-15 12:45:06.383773] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.086 [2024-10-15 12:45:06.383811] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:49.372 12:45:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1030616 /var/tmp/spdk-nbd.sock 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1030616 ']' 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:49.372 12:45:09 event.app_repeat -- event/event.sh@39 -- # killprocess 1030616 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1030616 ']' 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1030616 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1030616 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1030616' 00:04:49.372 killing process with pid 1030616 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1030616 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1030616 00:04:49.372 spdk_app_start is called in Round 0. 00:04:49.372 Shutdown signal received, stop current app iteration 00:04:49.372 Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 reinitialization... 00:04:49.372 spdk_app_start is called in Round 1. 00:04:49.372 Shutdown signal received, stop current app iteration 00:04:49.372 Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 reinitialization... 00:04:49.372 spdk_app_start is called in Round 2. 
00:04:49.372 Shutdown signal received, stop current app iteration 00:04:49.372 Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 reinitialization... 00:04:49.372 spdk_app_start is called in Round 3. 00:04:49.372 Shutdown signal received, stop current app iteration 00:04:49.372 12:45:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:49.372 12:45:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:49.372 00:04:49.372 real 0m16.364s 00:04:49.372 user 0m35.992s 00:04:49.372 sys 0m2.512s 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.372 12:45:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.372 ************************************ 00:04:49.372 END TEST app_repeat 00:04:49.372 ************************************ 00:04:49.372 12:45:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:49.372 12:45:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:49.372 12:45:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.372 12:45:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.372 12:45:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.372 ************************************ 00:04:49.372 START TEST cpu_locks 00:04:49.372 ************************************ 00:04:49.372 12:45:09 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:49.632 * Looking for test storage... 
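The app_repeat teardown traced above runs autotest_common.sh's killprocess helper: the xtrace lines show it probing the pid with `kill -0`, resolving the command name via `ps --no-headers -o comm=`, refusing to signal a `sudo` wrapper, then killing and waiting. A minimal sketch of that flow (simplified; the real helper also handles sudo-wrapped targets and retries):

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess() pattern in the trace above: confirm
# the pid is alive, look up its command name, refuse to signal a sudo shim,
# then SIGTERM the process and reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1           # nothing to kill
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # comm lookup, as traced
    [ "$process_name" = "sudo" ] && return 1         # never kill the sudo shim
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

# Demo target: a throwaway background process standing in for spdk_tgt.
sleep 60 &
bgpid=$!
killprocess "$bgpid"
```

The `wait` at the end is what lets the `killprocess` / `wait $pid` pair in the log double as teardown and exit-status collection in one step.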
00:04:49.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:49.632 12:45:09 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:49.632 12:45:09 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:49.632 12:45:09 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:49.632 12:45:09 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.632 12:45:09 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:49.632 12:45:09 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.632 12:45:09 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:49.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.632 --rc genhtml_branch_coverage=1 00:04:49.632 --rc genhtml_function_coverage=1 00:04:49.632 --rc genhtml_legend=1 00:04:49.632 --rc geninfo_all_blocks=1 00:04:49.632 --rc geninfo_unexecuted_blocks=1 00:04:49.632 00:04:49.632 ' 00:04:49.632 12:45:09 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:49.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.632 --rc genhtml_branch_coverage=1 00:04:49.632 --rc genhtml_function_coverage=1 00:04:49.632 --rc genhtml_legend=1 00:04:49.632 --rc geninfo_all_blocks=1 00:04:49.632 --rc geninfo_unexecuted_blocks=1 
00:04:49.632 00:04:49.632 ' 00:04:49.632 12:45:09 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:49.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.632 --rc genhtml_branch_coverage=1 00:04:49.632 --rc genhtml_function_coverage=1 00:04:49.632 --rc genhtml_legend=1 00:04:49.632 --rc geninfo_all_blocks=1 00:04:49.632 --rc geninfo_unexecuted_blocks=1 00:04:49.632 00:04:49.632 ' 00:04:49.632 12:45:09 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:49.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.632 --rc genhtml_branch_coverage=1 00:04:49.632 --rc genhtml_function_coverage=1 00:04:49.632 --rc genhtml_legend=1 00:04:49.632 --rc geninfo_all_blocks=1 00:04:49.632 --rc geninfo_unexecuted_blocks=1 00:04:49.632 00:04:49.632 ' 00:04:49.632 12:45:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:49.632 12:45:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:49.632 12:45:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:49.632 12:45:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:49.632 12:45:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.632 12:45:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.632 12:45:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.632 ************************************ 00:04:49.632 START TEST default_locks 00:04:49.632 ************************************ 00:04:49.632 12:45:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:49.632 12:45:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1034132 00:04:49.632 12:45:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1034132 00:04:49.632 12:45:09 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.632 12:45:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1034132 ']' 00:04:49.632 12:45:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.632 12:45:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.632 12:45:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.632 12:45:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.632 12:45:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.632 [2024-10-15 12:45:09.897372] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:04:49.632 [2024-10-15 12:45:09.897417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034132 ] 00:04:49.891 [2024-10-15 12:45:09.965316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.892 [2024-10-15 12:45:10.007056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.176 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.176 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:50.176 12:45:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1034132 00:04:50.176 12:45:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1034132 00:04:50.176 12:45:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.436 lslocks: write error 00:04:50.436 12:45:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1034132 00:04:50.436 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1034132 ']' 00:04:50.436 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1034132 00:04:50.436 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:50.436 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.436 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1034132 00:04:50.436 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.436 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.436 12:45:10 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1034132' 00:04:50.436 killing process with pid 1034132 00:04:50.436 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1034132 00:04:50.436 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1034132 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1034132 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1034132 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1034132 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1034132 ']' 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1034132) - No such process 00:04:50.696 ERROR: process (pid: 1034132) is no longer running 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:50.696 00:04:50.696 real 0m1.072s 00:04:50.696 user 0m1.022s 00:04:50.696 sys 0m0.487s 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.696 12:45:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.696 ************************************ 00:04:50.696 END TEST default_locks 00:04:50.696 ************************************ 00:04:50.696 12:45:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:50.696 12:45:10 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.696 12:45:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.696 12:45:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.696 ************************************ 00:04:50.696 START TEST default_locks_via_rpc 00:04:50.696 ************************************ 00:04:50.696 12:45:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:50.696 12:45:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1034397 00:04:50.696 12:45:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1034397 00:04:50.696 12:45:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.696 12:45:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1034397 ']' 00:04:50.696 12:45:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.696 12:45:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.696 12:45:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.696 12:45:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.696 12:45:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.955 [2024-10-15 12:45:11.040685] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:04:50.955 [2024-10-15 12:45:11.040728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034397 ] 00:04:50.955 [2024-10-15 12:45:11.108542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.955 [2024-10-15 12:45:11.150483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.214 12:45:11 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1034397 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1034397 00:04:51.214 12:45:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.782 12:45:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1034397 00:04:51.782 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1034397 ']' 00:04:51.782 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1034397 00:04:51.782 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:51.782 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:51.782 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1034397 00:04:51.782 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:51.782 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:51.782 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1034397' 00:04:51.782 killing process with pid 1034397 00:04:51.782 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1034397 00:04:51.782 12:45:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1034397 00:04:52.041 00:04:52.041 real 0m1.169s 00:04:52.041 user 0m1.133s 00:04:52.041 sys 0m0.528s 00:04:52.041 12:45:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.041 12:45:12 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.041 ************************************ 00:04:52.041 END TEST default_locks_via_rpc 00:04:52.041 ************************************ 00:04:52.041 12:45:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:52.041 12:45:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.041 12:45:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.041 12:45:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.041 ************************************ 00:04:52.041 START TEST non_locking_app_on_locked_coremask 00:04:52.041 ************************************ 00:04:52.041 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:52.041 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1034654 00:04:52.041 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1034654 /var/tmp/spdk.sock 00:04:52.041 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.041 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1034654 ']' 00:04:52.041 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.041 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.041 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:52.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.041 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.041 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.041 [2024-10-15 12:45:12.275637] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:04:52.041 [2024-10-15 12:45:12.275678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034654 ] 00:04:52.041 [2024-10-15 12:45:12.343528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.300 [2024-10-15 12:45:12.386232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.300 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.300 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:52.300 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1034657 00:04:52.300 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1034657 /var/tmp/spdk2.sock 00:04:52.300 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:52.300 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1034657 ']' 00:04:52.300 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:52.300 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.300 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.300 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.300 12:45:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.560 [2024-10-15 12:45:12.649097] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:04:52.560 [2024-10-15 12:45:12.649141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034657 ] 00:04:52.560 [2024-10-15 12:45:12.719008] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:52.560 [2024-10-15 12:45:12.719030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.560 [2024-10-15 12:45:12.799216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.497 12:45:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.497 12:45:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:53.497 12:45:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1034654 00:04:53.497 12:45:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1034654 00:04:53.497 12:45:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.756 lslocks: write error 00:04:53.756 12:45:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1034654 00:04:53.756 12:45:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1034654 ']' 00:04:53.756 12:45:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1034654 00:04:53.756 12:45:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:53.756 12:45:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.756 12:45:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1034654 00:04:53.756 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.756 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.756 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1034654' 00:04:53.756 killing process with pid 1034654 00:04:53.756 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1034654 00:04:53.756 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1034654 00:04:54.324 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1034657 00:04:54.324 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1034657 ']' 00:04:54.324 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1034657 00:04:54.324 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:54.324 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.324 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1034657 00:04:54.583 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.583 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.583 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1034657' 00:04:54.583 killing process with pid 1034657 00:04:54.583 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1034657 00:04:54.583 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1034657 00:04:54.842 00:04:54.842 real 0m2.724s 00:04:54.842 user 0m2.881s 00:04:54.842 sys 0m0.904s 00:04:54.842 12:45:14 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.842 12:45:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.842 ************************************ 00:04:54.842 END TEST non_locking_app_on_locked_coremask 00:04:54.842 ************************************ 00:04:54.842 12:45:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:54.842 12:45:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.842 12:45:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.842 12:45:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.842 ************************************ 00:04:54.842 START TEST locking_app_on_unlocked_coremask 00:04:54.842 ************************************ 00:04:54.842 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:04:54.842 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1035151 00:04:54.842 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1035151 /var/tmp/spdk.sock 00:04:54.842 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:54.842 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1035151 ']' 00:04:54.842 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.842 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.842 12:45:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.842 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.842 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.842 [2024-10-15 12:45:15.073475] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:04:54.842 [2024-10-15 12:45:15.073517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035151 ] 00:04:54.842 [2024-10-15 12:45:15.142578] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:54.842 [2024-10-15 12:45:15.142614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.102 [2024-10-15 12:45:15.179994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.102 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.102 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:55.102 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1035157 00:04:55.102 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1035157 /var/tmp/spdk2.sock 00:04:55.102 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:55.102 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1035157 ']' 00:04:55.102 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.102 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.102 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.102 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.102 12:45:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.362 [2024-10-15 12:45:15.446124] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:04:55.362 [2024-10-15 12:45:15.446172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035157 ] 00:04:55.362 [2024-10-15 12:45:15.519989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.362 [2024-10-15 12:45:15.600536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.299 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.299 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:56.299 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1035157 00:04:56.299 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1035157 00:04:56.299 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.558 lslocks: write error 00:04:56.558 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1035151 00:04:56.558 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1035151 ']' 00:04:56.558 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1035151 00:04:56.558 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:56.558 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.558 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1035151 00:04:56.558 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.558 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.558 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1035151' 00:04:56.558 killing process with pid 1035151 00:04:56.558 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1035151 00:04:56.558 12:45:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1035151 00:04:57.126 12:45:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1035157 00:04:57.126 12:45:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1035157 ']' 00:04:57.126 12:45:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1035157 00:04:57.126 12:45:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:57.126 12:45:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.126 12:45:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1035157 00:04:57.385 12:45:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.385 12:45:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.385 12:45:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1035157' 00:04:57.385 killing process with pid 1035157 00:04:57.385 12:45:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1035157 00:04:57.385 12:45:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1035157 00:04:57.643 00:04:57.643 real 0m2.775s 00:04:57.643 user 0m2.943s 00:04:57.643 sys 0m0.910s 00:04:57.643 12:45:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.643 12:45:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.643 ************************************ 00:04:57.643 END TEST locking_app_on_unlocked_coremask 00:04:57.643 ************************************ 00:04:57.643 12:45:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:57.643 12:45:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.643 12:45:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.643 12:45:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.643 ************************************ 00:04:57.643 START TEST locking_app_on_locked_coremask 00:04:57.643 ************************************ 00:04:57.643 12:45:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:04:57.643 12:45:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1035649 00:04:57.643 12:45:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1035649 /var/tmp/spdk.sock 00:04:57.643 12:45:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.643 12:45:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1035649 ']' 00:04:57.643 12:45:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:57.643 12:45:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.643 12:45:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.644 12:45:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.644 12:45:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.644 [2024-10-15 12:45:17.916802] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:04:57.644 [2024-10-15 12:45:17.916844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035649 ] 00:04:57.902 [2024-10-15 12:45:17.983795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.902 [2024-10-15 12:45:18.025629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1035659 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1035659 /var/tmp/spdk2.sock 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1035659 /var/tmp/spdk2.sock 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1035659 /var/tmp/spdk2.sock 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1035659 ']' 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.161 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.161 [2024-10-15 12:45:18.286682] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:04:58.161 [2024-10-15 12:45:18.286727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035659 ] 00:04:58.161 [2024-10-15 12:45:18.357064] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1035649 has claimed it. 00:04:58.161 [2024-10-15 12:45:18.357095] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:58.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1035659) - No such process 00:04:58.730 ERROR: process (pid: 1035659) is no longer running 00:04:58.730 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.730 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:58.730 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:58.730 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.730 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:58.730 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.730 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1035649 00:04:58.730 12:45:18 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1035649 00:04:58.730 12:45:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.989 lslocks: write error 00:04:58.989 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1035649 00:04:58.989 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1035649 ']' 00:04:58.989 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1035649 00:04:58.989 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:58.989 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.989 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1035649 00:04:58.989 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.989 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.989 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1035649' 00:04:58.989 killing process with pid 1035649 00:04:58.989 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1035649 00:04:58.989 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1035649 00:04:59.249 00:04:59.249 real 0m1.676s 00:04:59.249 user 0m1.788s 00:04:59.249 sys 0m0.561s 00:04:59.249 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.249 12:45:19 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:59.249 ************************************ 00:04:59.249 END TEST locking_app_on_locked_coremask 00:04:59.249 ************************************ 00:04:59.508 12:45:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:59.508 12:45:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.508 12:45:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.508 12:45:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.508 ************************************ 00:04:59.508 START TEST locking_overlapped_coremask 00:04:59.508 ************************************ 00:04:59.508 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:04:59.508 12:45:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1035915 00:04:59.508 12:45:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1035915 /var/tmp/spdk.sock 00:04:59.508 12:45:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:59.508 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1035915 ']' 00:04:59.508 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.508 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.508 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:59.508 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.508 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.508 [2024-10-15 12:45:19.661105] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:04:59.508 [2024-10-15 12:45:19.661149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035915 ] 00:04:59.508 [2024-10-15 12:45:19.733226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.508 [2024-10-15 12:45:19.775418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.508 [2024-10-15 12:45:19.775548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.508 [2024-10-15 12:45:19.775549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1035924 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1035924 /var/tmp/spdk2.sock 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 1035924 /var/tmp/spdk2.sock 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1035924 /var/tmp/spdk2.sock 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1035924 ']' 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.768 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.769 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.769 12:45:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.769 [2024-10-15 12:45:20.045528] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:04:59.769 [2024-10-15 12:45:20.045579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035924 ] 00:05:00.027 [2024-10-15 12:45:20.125186] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1035915 has claimed it. 00:05:00.027 [2024-10-15 12:45:20.125226] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:00.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1035924) - No such process 00:05:00.595 ERROR: process (pid: 1035924) is no longer running 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1035915 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1035915 ']' 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1035915 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1035915 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1035915' 00:05:00.595 killing process with pid 1035915 00:05:00.595 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1035915 00:05:00.596 12:45:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1035915 00:05:00.856 00:05:00.856 real 0m1.438s 00:05:00.856 user 0m3.972s 00:05:00.856 sys 0m0.395s 00:05:00.856 12:45:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.856 12:45:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.856 
************************************ 00:05:00.856 END TEST locking_overlapped_coremask 00:05:00.856 ************************************ 00:05:00.856 12:45:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:00.856 12:45:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.856 12:45:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.856 12:45:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.856 ************************************ 00:05:00.856 START TEST locking_overlapped_coremask_via_rpc 00:05:00.856 ************************************ 00:05:00.856 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:00.856 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1036187 00:05:00.856 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1036187 /var/tmp/spdk.sock 00:05:00.856 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:00.856 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1036187 ']' 00:05:00.856 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.856 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.856 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:00.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.856 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.856 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.856 [2024-10-15 12:45:21.166714] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:05:00.856 [2024-10-15 12:45:21.166758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036187 ] 00:05:01.115 [2024-10-15 12:45:21.236874] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:01.115 [2024-10-15 12:45:21.236905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.115 [2024-10-15 12:45:21.275745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.115 [2024-10-15 12:45:21.275850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.115 [2024-10-15 12:45:21.275851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.374 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.374 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:01.374 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1036237 00:05:01.374 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1036237 /var/tmp/spdk2.sock 00:05:01.374 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:01.374 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1036237 ']' 00:05:01.374 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.374 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.374 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.374 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.374 12:45:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.374 [2024-10-15 12:45:21.543258] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:05:01.374 [2024-10-15 12:45:21.543313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036237 ] 00:05:01.374 [2024-10-15 12:45:21.623595] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:01.374 [2024-10-15 12:45:21.623633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.633 [2024-10-15 12:45:21.710595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.633 [2024-10-15 12:45:21.710713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.633 [2024-10-15 12:45:21.710713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.204 12:45:22 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.204 [2024-10-15 12:45:22.411673] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1036187 has claimed it. 00:05:02.204 request: 00:05:02.204 { 00:05:02.204 "method": "framework_enable_cpumask_locks", 00:05:02.204 "req_id": 1 00:05:02.204 } 00:05:02.204 Got JSON-RPC error response 00:05:02.204 response: 00:05:02.204 { 00:05:02.204 "code": -32603, 00:05:02.204 "message": "Failed to claim CPU core: 2" 00:05:02.204 } 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1036187 /var/tmp/spdk.sock 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 1036187 ']' 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.204 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.463 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.463 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:02.463 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1036237 /var/tmp/spdk2.sock 00:05:02.463 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1036237 ']' 00:05:02.463 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.463 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.463 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:02.463 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.463 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.722 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.722 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:02.722 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:02.722 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:02.722 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:02.722 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:02.722 00:05:02.722 real 0m1.705s 00:05:02.722 user 0m0.810s 00:05:02.722 sys 0m0.148s 00:05:02.722 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.722 12:45:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.722 ************************************ 00:05:02.722 END TEST locking_overlapped_coremask_via_rpc 00:05:02.722 ************************************ 00:05:02.722 12:45:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:02.722 12:45:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1036187 ]] 00:05:02.722 12:45:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1036187 00:05:02.722 12:45:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1036187 ']' 00:05:02.722 12:45:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1036187 00:05:02.722 12:45:22 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:02.722 12:45:22 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.722 12:45:22 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1036187 00:05:02.722 12:45:22 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.722 12:45:22 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.722 12:45:22 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1036187' 00:05:02.722 killing process with pid 1036187 00:05:02.722 12:45:22 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1036187 00:05:02.722 12:45:22 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1036187 00:05:02.982 12:45:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1036237 ]] 00:05:02.982 12:45:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1036237 00:05:02.982 12:45:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1036237 ']' 00:05:02.982 12:45:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1036237 00:05:02.982 12:45:23 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:02.982 12:45:23 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.982 12:45:23 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1036237 00:05:02.982 12:45:23 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:02.982 12:45:23 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:02.982 12:45:23 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1036237' 00:05:02.982 killing process with pid 1036237 00:05:02.982 12:45:23 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1036237 00:05:02.982 12:45:23 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1036237 00:05:03.551 12:45:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:03.551 12:45:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:03.551 12:45:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1036187 ]] 00:05:03.551 12:45:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1036187 00:05:03.551 12:45:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1036187 ']' 00:05:03.551 12:45:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1036187 00:05:03.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1036187) - No such process 00:05:03.551 12:45:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1036187 is not found' 00:05:03.551 Process with pid 1036187 is not found 00:05:03.551 12:45:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1036237 ]] 00:05:03.551 12:45:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1036237 00:05:03.551 12:45:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1036237 ']' 00:05:03.551 12:45:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1036237 00:05:03.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1036237) - No such process 00:05:03.551 12:45:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1036237 is not found' 00:05:03.551 Process with pid 1036237 is not found 00:05:03.551 12:45:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:03.551 00:05:03.551 real 0m13.945s 00:05:03.551 user 0m24.318s 00:05:03.551 sys 0m4.902s 00:05:03.551 12:45:23 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.551 
12:45:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.551 ************************************ 00:05:03.551 END TEST cpu_locks 00:05:03.551 ************************************ 00:05:03.551 00:05:03.551 real 0m38.916s 00:05:03.551 user 1m14.705s 00:05:03.551 sys 0m8.410s 00:05:03.551 12:45:23 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.551 12:45:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.551 ************************************ 00:05:03.551 END TEST event 00:05:03.551 ************************************ 00:05:03.551 12:45:23 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:03.551 12:45:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.551 12:45:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.551 12:45:23 -- common/autotest_common.sh@10 -- # set +x 00:05:03.551 ************************************ 00:05:03.551 START TEST thread 00:05:03.551 ************************************ 00:05:03.551 12:45:23 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:03.551 * Looking for test storage... 
00:05:03.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:03.551 12:45:23 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.551 12:45:23 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.551 12:45:23 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.551 12:45:23 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.551 12:45:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.551 12:45:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.551 12:45:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.551 12:45:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.551 12:45:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.551 12:45:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.551 12:45:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.551 12:45:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.551 12:45:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.551 12:45:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.551 12:45:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.551 12:45:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:03.551 12:45:23 thread -- scripts/common.sh@345 -- # : 1 00:05:03.551 12:45:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.551 12:45:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.551 12:45:23 thread -- scripts/common.sh@365 -- # decimal 1 00:05:03.551 12:45:23 thread -- scripts/common.sh@353 -- # local d=1 00:05:03.551 12:45:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.551 12:45:23 thread -- scripts/common.sh@355 -- # echo 1 00:05:03.551 12:45:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.551 12:45:23 thread -- scripts/common.sh@366 -- # decimal 2 00:05:03.551 12:45:23 thread -- scripts/common.sh@353 -- # local d=2 00:05:03.551 12:45:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.551 12:45:23 thread -- scripts/common.sh@355 -- # echo 2 00:05:03.551 12:45:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.551 12:45:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.551 12:45:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.551 12:45:23 thread -- scripts/common.sh@368 -- # return 0 00:05:03.551 12:45:23 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.551 12:45:23 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.551 --rc genhtml_branch_coverage=1 00:05:03.551 --rc genhtml_function_coverage=1 00:05:03.551 --rc genhtml_legend=1 00:05:03.551 --rc geninfo_all_blocks=1 00:05:03.551 --rc geninfo_unexecuted_blocks=1 00:05:03.551 00:05:03.551 ' 00:05:03.551 12:45:23 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.551 --rc genhtml_branch_coverage=1 00:05:03.551 --rc genhtml_function_coverage=1 00:05:03.551 --rc genhtml_legend=1 00:05:03.551 --rc geninfo_all_blocks=1 00:05:03.551 --rc geninfo_unexecuted_blocks=1 00:05:03.551 00:05:03.551 ' 00:05:03.551 12:45:23 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.551 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.551 --rc genhtml_branch_coverage=1 00:05:03.551 --rc genhtml_function_coverage=1 00:05:03.551 --rc genhtml_legend=1 00:05:03.551 --rc geninfo_all_blocks=1 00:05:03.551 --rc geninfo_unexecuted_blocks=1 00:05:03.551 00:05:03.551 ' 00:05:03.551 12:45:23 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.551 --rc genhtml_branch_coverage=1 00:05:03.551 --rc genhtml_function_coverage=1 00:05:03.551 --rc genhtml_legend=1 00:05:03.551 --rc geninfo_all_blocks=1 00:05:03.551 --rc geninfo_unexecuted_blocks=1 00:05:03.551 00:05:03.551 ' 00:05:03.551 12:45:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:03.551 12:45:23 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:03.552 12:45:23 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.552 12:45:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.810 ************************************ 00:05:03.811 START TEST thread_poller_perf 00:05:03.811 ************************************ 00:05:03.811 12:45:23 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:03.811 [2024-10-15 12:45:23.907917] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:05:03.811 [2024-10-15 12:45:23.907974] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036756 ] 00:05:03.811 [2024-10-15 12:45:23.978286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.811 [2024-10-15 12:45:24.017610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.811 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:04.748 [2024-10-15T10:45:25.067Z] ====================================== 00:05:04.748 [2024-10-15T10:45:25.067Z] busy:2104635866 (cyc) 00:05:04.748 [2024-10-15T10:45:25.067Z] total_run_count: 425000 00:05:04.748 [2024-10-15T10:45:25.067Z] tsc_hz: 2100000000 (cyc) 00:05:04.748 [2024-10-15T10:45:25.067Z] ====================================== 00:05:04.748 [2024-10-15T10:45:25.067Z] poller_cost: 4952 (cyc), 2358 (nsec) 00:05:04.748 00:05:04.748 real 0m1.169s 00:05:04.748 user 0m1.088s 00:05:04.748 sys 0m0.076s 00:05:04.748 12:45:25 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.748 12:45:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:04.748 ************************************ 00:05:04.748 END TEST thread_poller_perf 00:05:04.748 ************************************ 00:05:05.007 12:45:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:05.007 12:45:25 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:05.007 12:45:25 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.007 12:45:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.007 ************************************ 00:05:05.007 START TEST thread_poller_perf 00:05:05.007 
************************************ 00:05:05.007 12:45:25 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:05.007 [2024-10-15 12:45:25.154278] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:05:05.007 [2024-10-15 12:45:25.154337] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1037011 ] 00:05:05.007 [2024-10-15 12:45:25.223162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.007 [2024-10-15 12:45:25.262573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.007 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:06.384 [2024-10-15T10:45:26.703Z] ====================================== 00:05:06.384 [2024-10-15T10:45:26.703Z] busy:2101573048 (cyc) 00:05:06.384 [2024-10-15T10:45:26.703Z] total_run_count: 5576000 00:05:06.384 [2024-10-15T10:45:26.703Z] tsc_hz: 2100000000 (cyc) 00:05:06.384 [2024-10-15T10:45:26.703Z] ====================================== 00:05:06.384 [2024-10-15T10:45:26.703Z] poller_cost: 376 (cyc), 179 (nsec) 00:05:06.384 00:05:06.384 real 0m1.168s 00:05:06.384 user 0m1.091s 00:05:06.384 sys 0m0.072s 00:05:06.384 12:45:26 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.384 12:45:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.384 ************************************ 00:05:06.384 END TEST thread_poller_perf 00:05:06.384 ************************************ 00:05:06.384 12:45:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:06.384 00:05:06.384 real 0m2.649s 00:05:06.384 user 0m2.341s 00:05:06.384 sys 0m0.321s 00:05:06.384 12:45:26 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.384 12:45:26 thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.384 ************************************ 00:05:06.384 END TEST thread 00:05:06.384 ************************************ 00:05:06.384 12:45:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:06.384 12:45:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:06.384 12:45:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.384 12:45:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.384 12:45:26 -- common/autotest_common.sh@10 -- # set +x 00:05:06.384 ************************************ 00:05:06.384 START TEST app_cmdline 00:05:06.384 ************************************ 00:05:06.384 12:45:26 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:06.384 * Looking for test storage... 00:05:06.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:06.384 12:45:26 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:06.384 12:45:26 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:06.384 12:45:26 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:06.384 12:45:26 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.384 12:45:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:06.384 12:45:26 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.384 12:45:26 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:06.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.384 --rc genhtml_branch_coverage=1 
00:05:06.384 --rc genhtml_function_coverage=1 00:05:06.384 --rc genhtml_legend=1 00:05:06.384 --rc geninfo_all_blocks=1 00:05:06.384 --rc geninfo_unexecuted_blocks=1 00:05:06.384 00:05:06.384 ' 00:05:06.384 12:45:26 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:06.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.384 --rc genhtml_branch_coverage=1 00:05:06.384 --rc genhtml_function_coverage=1 00:05:06.384 --rc genhtml_legend=1 00:05:06.384 --rc geninfo_all_blocks=1 00:05:06.384 --rc geninfo_unexecuted_blocks=1 00:05:06.384 00:05:06.384 ' 00:05:06.384 12:45:26 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:06.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.384 --rc genhtml_branch_coverage=1 00:05:06.384 --rc genhtml_function_coverage=1 00:05:06.384 --rc genhtml_legend=1 00:05:06.384 --rc geninfo_all_blocks=1 00:05:06.384 --rc geninfo_unexecuted_blocks=1 00:05:06.384 00:05:06.384 ' 00:05:06.384 12:45:26 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:06.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.385 --rc genhtml_branch_coverage=1 00:05:06.385 --rc genhtml_function_coverage=1 00:05:06.385 --rc genhtml_legend=1 00:05:06.385 --rc geninfo_all_blocks=1 00:05:06.385 --rc geninfo_unexecuted_blocks=1 00:05:06.385 00:05:06.385 ' 00:05:06.385 12:45:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:06.385 12:45:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1037305 00:05:06.385 12:45:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1037305 00:05:06.385 12:45:26 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:06.385 12:45:26 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1037305 ']' 00:05:06.385 12:45:26 app_cmdline -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:06.385 12:45:26 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.385 12:45:26 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.385 12:45:26 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.385 12:45:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:06.385 [2024-10-15 12:45:26.627664] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:05:06.385 [2024-10-15 12:45:26.627710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1037305 ] 00:05:06.385 [2024-10-15 12:45:26.695995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.643 [2024-10-15 12:45:26.738453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.643 12:45:26 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.643 12:45:26 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:06.643 12:45:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:06.902 { 00:05:06.902 "version": "SPDK v25.01-pre git sha1 96764f31c", 00:05:06.902 "fields": { 00:05:06.902 "major": 25, 00:05:06.902 "minor": 1, 00:05:06.902 "patch": 0, 00:05:06.902 "suffix": "-pre", 00:05:06.902 "commit": "96764f31c" 00:05:06.902 } 00:05:06.902 } 00:05:06.902 12:45:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:06.902 12:45:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:06.902 12:45:27 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:06.902 12:45:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:06.902 12:45:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:06.902 12:45:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:06.902 12:45:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.902 12:45:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:06.902 12:45:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:06.902 12:45:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:06.902 12:45:27 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:07.161 request: 00:05:07.161 { 00:05:07.161 "method": "env_dpdk_get_mem_stats", 00:05:07.161 "req_id": 1 00:05:07.161 } 00:05:07.161 Got JSON-RPC error response 00:05:07.161 response: 00:05:07.161 { 00:05:07.161 "code": -32601, 00:05:07.162 "message": "Method not found" 00:05:07.162 } 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:07.162 12:45:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1037305 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1037305 ']' 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1037305 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1037305 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1037305' 00:05:07.162 killing process with pid 1037305 00:05:07.162 
12:45:27 app_cmdline -- common/autotest_common.sh@969 -- # kill 1037305 00:05:07.162 12:45:27 app_cmdline -- common/autotest_common.sh@974 -- # wait 1037305 00:05:07.421 00:05:07.421 real 0m1.292s 00:05:07.421 user 0m1.475s 00:05:07.421 sys 0m0.448s 00:05:07.421 12:45:27 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.421 12:45:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:07.421 ************************************ 00:05:07.421 END TEST app_cmdline 00:05:07.421 ************************************ 00:05:07.422 12:45:27 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:07.422 12:45:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.422 12:45:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.422 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:05:07.681 ************************************ 00:05:07.681 START TEST version 00:05:07.681 ************************************ 00:05:07.681 12:45:27 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:07.681 * Looking for test storage... 
00:05:07.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:07.681 12:45:27 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.681 12:45:27 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.681 12:45:27 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.681 12:45:27 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.681 12:45:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.681 12:45:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.681 12:45:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.681 12:45:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.681 12:45:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.681 12:45:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.681 12:45:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.681 12:45:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.681 12:45:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.681 12:45:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.681 12:45:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.681 12:45:27 version -- scripts/common.sh@344 -- # case "$op" in 00:05:07.681 12:45:27 version -- scripts/common.sh@345 -- # : 1 00:05:07.681 12:45:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.681 12:45:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.681 12:45:27 version -- scripts/common.sh@365 -- # decimal 1 00:05:07.681 12:45:27 version -- scripts/common.sh@353 -- # local d=1 00:05:07.681 12:45:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.681 12:45:27 version -- scripts/common.sh@355 -- # echo 1 00:05:07.681 12:45:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.681 12:45:27 version -- scripts/common.sh@366 -- # decimal 2 00:05:07.681 12:45:27 version -- scripts/common.sh@353 -- # local d=2 00:05:07.681 12:45:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.681 12:45:27 version -- scripts/common.sh@355 -- # echo 2 00:05:07.681 12:45:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.681 12:45:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.681 12:45:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.681 12:45:27 version -- scripts/common.sh@368 -- # return 0 00:05:07.681 12:45:27 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.681 12:45:27 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.681 --rc genhtml_branch_coverage=1 00:05:07.681 --rc genhtml_function_coverage=1 00:05:07.681 --rc genhtml_legend=1 00:05:07.681 --rc geninfo_all_blocks=1 00:05:07.681 --rc geninfo_unexecuted_blocks=1 00:05:07.681 00:05:07.681 ' 00:05:07.681 12:45:27 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.681 --rc genhtml_branch_coverage=1 00:05:07.681 --rc genhtml_function_coverage=1 00:05:07.681 --rc genhtml_legend=1 00:05:07.681 --rc geninfo_all_blocks=1 00:05:07.681 --rc geninfo_unexecuted_blocks=1 00:05:07.681 00:05:07.681 ' 00:05:07.681 12:45:27 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.681 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.681 --rc genhtml_branch_coverage=1 00:05:07.681 --rc genhtml_function_coverage=1 00:05:07.681 --rc genhtml_legend=1 00:05:07.681 --rc geninfo_all_blocks=1 00:05:07.681 --rc geninfo_unexecuted_blocks=1 00:05:07.681 00:05:07.681 ' 00:05:07.681 12:45:27 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.681 --rc genhtml_branch_coverage=1 00:05:07.681 --rc genhtml_function_coverage=1 00:05:07.681 --rc genhtml_legend=1 00:05:07.681 --rc geninfo_all_blocks=1 00:05:07.681 --rc geninfo_unexecuted_blocks=1 00:05:07.681 00:05:07.681 ' 00:05:07.681 12:45:27 version -- app/version.sh@17 -- # get_header_version major 00:05:07.681 12:45:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:07.681 12:45:27 version -- app/version.sh@14 -- # cut -f2 00:05:07.681 12:45:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:07.681 12:45:27 version -- app/version.sh@17 -- # major=25 00:05:07.681 12:45:27 version -- app/version.sh@18 -- # get_header_version minor 00:05:07.681 12:45:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:07.681 12:45:27 version -- app/version.sh@14 -- # cut -f2 00:05:07.681 12:45:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:07.681 12:45:27 version -- app/version.sh@18 -- # minor=1 00:05:07.681 12:45:27 version -- app/version.sh@19 -- # get_header_version patch 00:05:07.681 12:45:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:07.681 12:45:27 version -- app/version.sh@14 -- # cut -f2 00:05:07.681 12:45:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:07.681 
12:45:27 version -- app/version.sh@19 -- # patch=0 00:05:07.681 12:45:27 version -- app/version.sh@20 -- # get_header_version suffix 00:05:07.681 12:45:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:07.681 12:45:27 version -- app/version.sh@14 -- # cut -f2 00:05:07.681 12:45:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:07.681 12:45:27 version -- app/version.sh@20 -- # suffix=-pre 00:05:07.681 12:45:27 version -- app/version.sh@22 -- # version=25.1 00:05:07.681 12:45:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:07.681 12:45:27 version -- app/version.sh@28 -- # version=25.1rc0 00:05:07.681 12:45:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:07.681 12:45:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:07.939 12:45:28 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:07.939 12:45:28 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:07.939 00:05:07.939 real 0m0.242s 00:05:07.939 user 0m0.144s 00:05:07.939 sys 0m0.140s 00:05:07.939 12:45:28 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.940 12:45:28 version -- common/autotest_common.sh@10 -- # set +x 00:05:07.940 ************************************ 00:05:07.940 END TEST version 00:05:07.940 ************************************ 00:05:07.940 12:45:28 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:07.940 12:45:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:07.940 12:45:28 -- spdk/autotest.sh@194 -- # uname -s 00:05:07.940 12:45:28 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:07.940 12:45:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:07.940 12:45:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:07.940 12:45:28 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:07.940 12:45:28 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:07.940 12:45:28 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:07.940 12:45:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:07.940 12:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:07.940 12:45:28 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:07.940 12:45:28 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:07.940 12:45:28 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:07.940 12:45:28 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:07.940 12:45:28 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:07.940 12:45:28 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:07.940 12:45:28 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:07.940 12:45:28 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:07.940 12:45:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.940 12:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:07.940 ************************************ 00:05:07.940 START TEST nvmf_tcp 00:05:07.940 ************************************ 00:05:07.940 12:45:28 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:07.940 * Looking for test storage... 
00:05:07.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:07.940 12:45:28 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.940 12:45:28 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.940 12:45:28 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:08.198 12:45:28 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:08.198 12:45:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.199 12:45:28 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:08.199 12:45:28 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.199 12:45:28 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:08.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.199 --rc genhtml_branch_coverage=1 00:05:08.199 --rc genhtml_function_coverage=1 00:05:08.199 --rc genhtml_legend=1 00:05:08.199 --rc geninfo_all_blocks=1 00:05:08.199 --rc geninfo_unexecuted_blocks=1 00:05:08.199 00:05:08.199 ' 00:05:08.199 12:45:28 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:08.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.199 --rc genhtml_branch_coverage=1 00:05:08.199 --rc genhtml_function_coverage=1 00:05:08.199 --rc genhtml_legend=1 00:05:08.199 --rc geninfo_all_blocks=1 00:05:08.199 --rc geninfo_unexecuted_blocks=1 00:05:08.199 00:05:08.199 ' 00:05:08.199 12:45:28 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:08.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.199 --rc genhtml_branch_coverage=1 00:05:08.199 --rc genhtml_function_coverage=1 00:05:08.199 --rc genhtml_legend=1 00:05:08.199 --rc geninfo_all_blocks=1 00:05:08.199 --rc geninfo_unexecuted_blocks=1 00:05:08.199 00:05:08.199 ' 00:05:08.199 12:45:28 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:08.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.199 --rc genhtml_branch_coverage=1 00:05:08.199 --rc genhtml_function_coverage=1 00:05:08.199 --rc genhtml_legend=1 00:05:08.199 --rc geninfo_all_blocks=1 00:05:08.199 --rc geninfo_unexecuted_blocks=1 00:05:08.199 00:05:08.199 ' 00:05:08.199 12:45:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:08.199 12:45:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:08.199 12:45:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:08.199 12:45:28 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:08.199 12:45:28 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.199 12:45:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.199 ************************************ 00:05:08.199 START TEST nvmf_target_core 00:05:08.199 ************************************ 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:08.199 * Looking for test storage... 
00:05:08.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:08.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.199 --rc genhtml_branch_coverage=1 00:05:08.199 --rc genhtml_function_coverage=1 00:05:08.199 --rc genhtml_legend=1 00:05:08.199 --rc geninfo_all_blocks=1 00:05:08.199 --rc geninfo_unexecuted_blocks=1 00:05:08.199 00:05:08.199 ' 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:08.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.199 --rc genhtml_branch_coverage=1 
00:05:08.199 --rc genhtml_function_coverage=1 00:05:08.199 --rc genhtml_legend=1 00:05:08.199 --rc geninfo_all_blocks=1 00:05:08.199 --rc geninfo_unexecuted_blocks=1 00:05:08.199 00:05:08.199 ' 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:08.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.199 --rc genhtml_branch_coverage=1 00:05:08.199 --rc genhtml_function_coverage=1 00:05:08.199 --rc genhtml_legend=1 00:05:08.199 --rc geninfo_all_blocks=1 00:05:08.199 --rc geninfo_unexecuted_blocks=1 00:05:08.199 00:05:08.199 ' 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:08.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.199 --rc genhtml_branch_coverage=1 00:05:08.199 --rc genhtml_function_coverage=1 00:05:08.199 --rc genhtml_legend=1 00:05:08.199 --rc geninfo_all_blocks=1 00:05:08.199 --rc geninfo_unexecuted_blocks=1 00:05:08.199 00:05:08.199 ' 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.199 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:08.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:08.459 ************************************ 00:05:08.459 START TEST nvmf_abort 00:05:08.459 ************************************ 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:08.459 * Looking for test storage... 
00:05:08.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.459 
12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:08.459 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:08.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.460 --rc genhtml_branch_coverage=1 00:05:08.460 --rc genhtml_function_coverage=1 00:05:08.460 --rc genhtml_legend=1 00:05:08.460 --rc geninfo_all_blocks=1 00:05:08.460 --rc 
geninfo_unexecuted_blocks=1 00:05:08.460 00:05:08.460 ' 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:08.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.460 --rc genhtml_branch_coverage=1 00:05:08.460 --rc genhtml_function_coverage=1 00:05:08.460 --rc genhtml_legend=1 00:05:08.460 --rc geninfo_all_blocks=1 00:05:08.460 --rc geninfo_unexecuted_blocks=1 00:05:08.460 00:05:08.460 ' 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:08.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.460 --rc genhtml_branch_coverage=1 00:05:08.460 --rc genhtml_function_coverage=1 00:05:08.460 --rc genhtml_legend=1 00:05:08.460 --rc geninfo_all_blocks=1 00:05:08.460 --rc geninfo_unexecuted_blocks=1 00:05:08.460 00:05:08.460 ' 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:08.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.460 --rc genhtml_branch_coverage=1 00:05:08.460 --rc genhtml_function_coverage=1 00:05:08.460 --rc genhtml_legend=1 00:05:08.460 --rc geninfo_all_blocks=1 00:05:08.460 --rc geninfo_unexecuted_blocks=1 00:05:08.460 00:05:08.460 ' 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.460 12:45:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:08.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:08.460 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:08.719 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:08.719 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:08.719 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:08.719 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:08.719 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:08.719 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:08.719 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:05:08.719 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:08.719 12:45:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:15.291 12:45:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:15.291 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:15.291 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:15.291 12:45:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:15.291 Found net devices under 0000:86:00.0: cvl_0_0 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:05:15.291 Found net devices under 0000:86:00.1: cvl_0_1 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:15.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:15.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:05:15.291 00:05:15.291 --- 10.0.0.2 ping statistics --- 00:05:15.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:15.291 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:05:15.291 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:15.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:15.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:05:15.291 00:05:15.291 --- 10.0.0.1 ping statistics --- 00:05:15.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:15.291 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1040986 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1040986 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1040986 ']' 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.292 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.292 [2024-10-15 12:45:34.911732] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:05:15.292 [2024-10-15 12:45:34.911777] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:15.292 [2024-10-15 12:45:34.983360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.292 [2024-10-15 12:45:35.026558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:15.292 [2024-10-15 12:45:35.026594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:15.292 [2024-10-15 12:45:35.026605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:15.292 [2024-10-15 12:45:35.026611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:15.292 [2024-10-15 12:45:35.026616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:15.292 [2024-10-15 12:45:35.028062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.292 [2024-10-15 12:45:35.028169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.292 [2024-10-15 12:45:35.028170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.292 [2024-10-15 12:45:35.164047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.292 Malloc0 00:05:15.292 12:45:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.292 Delay0 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.292 [2024-10-15 12:45:35.234290] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.292 12:45:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:15.292 [2024-10-15 12:45:35.361337] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:17.196 Initializing NVMe Controllers 00:05:17.196 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:17.196 controller IO queue size 128 less than required 00:05:17.196 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:17.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:17.196 Initialization complete. Launching workers. 
00:05:17.196 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38036 00:05:17.196 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38097, failed to submit 62 00:05:17.196 success 38040, unsuccessful 57, failed 0 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:17.196 rmmod nvme_tcp 00:05:17.196 rmmod nvme_fabrics 00:05:17.196 rmmod nvme_keyring 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:17.196 12:45:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1040986 ']' 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1040986 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1040986 ']' 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1040986 00:05:17.196 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:17.197 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:17.197 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040986 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040986' 00:05:17.456 killing process with pid 1040986 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1040986 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1040986 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- 
# grep -v SPDK_NVMF 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:17.456 12:45:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:19.995 12:45:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:19.995 00:05:19.995 real 0m11.225s 00:05:19.995 user 0m11.468s 00:05:19.995 sys 0m5.490s 00:05:19.995 12:45:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.995 12:45:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:19.995 ************************************ 00:05:19.995 END TEST nvmf_abort 00:05:19.995 ************************************ 00:05:19.995 12:45:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:19.995 12:45:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:19.995 12:45:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.995 12:45:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:19.995 ************************************ 00:05:19.995 START TEST nvmf_ns_hotplug_stress 00:05:19.995 ************************************ 00:05:19.995 12:45:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:19.995 * Looking for test storage... 00:05:19.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:19.995 12:45:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.995 12:45:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.995 12:45:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.995 
12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.995 12:45:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.995 --rc genhtml_branch_coverage=1 00:05:19.995 --rc genhtml_function_coverage=1 00:05:19.995 --rc genhtml_legend=1 00:05:19.995 --rc geninfo_all_blocks=1 00:05:19.995 --rc geninfo_unexecuted_blocks=1 00:05:19.995 00:05:19.995 ' 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.995 --rc genhtml_branch_coverage=1 00:05:19.995 --rc genhtml_function_coverage=1 00:05:19.995 --rc genhtml_legend=1 00:05:19.995 --rc geninfo_all_blocks=1 00:05:19.995 --rc geninfo_unexecuted_blocks=1 00:05:19.995 00:05:19.995 ' 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:19.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.995 --rc genhtml_branch_coverage=1 00:05:19.995 --rc genhtml_function_coverage=1 00:05:19.995 --rc genhtml_legend=1 00:05:19.995 --rc geninfo_all_blocks=1 00:05:19.995 --rc geninfo_unexecuted_blocks=1 00:05:19.995 00:05:19.995 ' 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.995 --rc genhtml_branch_coverage=1 00:05:19.995 --rc genhtml_function_coverage=1 00:05:19.995 --rc genhtml_legend=1 00:05:19.995 --rc geninfo_all_blocks=1 00:05:19.995 --rc geninfo_unexecuted_blocks=1 00:05:19.995 
00:05:19.995 ' 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.995 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:19.996 12:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:26.575 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:26.575 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:26.575 12:45:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:26.575 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:26.575 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:26.575 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:26.575 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:26.575 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:26.575 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:26.575 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:26.575 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:26.575 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:26.576 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:26.576 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:26.576 12:45:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:26.576 Found net devices under 0000:86:00.0: cvl_0_0 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:26.576 12:45:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:26.576 Found net devices under 0000:86:00.1: cvl_0_1 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:26.576 12:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:26.576 12:45:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:26.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:26.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:05:26.576 00:05:26.576 --- 10.0.0.2 ping statistics --- 00:05:26.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:26.576 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:26.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:26.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:05:26.576 00:05:26.576 --- 10.0.0.1 ping statistics --- 00:05:26.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:26.576 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:26.576 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1045006 00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1045006 00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1045006 ']' 00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:26.577 [2024-10-15 12:45:46.126950] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization...
00:05:26.577 [2024-10-15 12:45:46.126998] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:26.577 [2024-10-15 12:45:46.198155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:26.577 [2024-10-15 12:45:46.240217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:26.577 [2024-10-15 12:45:46.240252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:26.577 [2024-10-15 12:45:46.240262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:26.577 [2024-10-15 12:45:46.240268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:26.577 [2024-10-15 12:45:46.240273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:05:26.577 [2024-10-15 12:45:46.241574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:26.577 [2024-10-15 12:45:46.241684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:26.577 [2024-10-15 12:45:46.241684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0
00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:05:26.577 [2024-10-15 12:45:46.540801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:05:26.577 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:05:26.835 [2024-10-15 12:45:46.962266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:05:26.835 12:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:05:27.094 12:45:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:05:27.094 Malloc0
00:05:27.094 12:45:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:05:27.419 Delay0
00:05:27.419 12:45:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:27.709 12:45:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:05:27.709 NULL1
00:05:27.709 12:45:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:05:28.017 12:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1045447
00:05:28.017 12:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:05:28.017 12:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:28.017 12:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:28.275 12:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:28.276 12:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:05:28.276 12:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:05:28.534 true
00:05:28.534 12:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:28.534 12:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:28.796 12:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:29.056 12:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:05:29.056 12:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:05:29.056 true
00:05:29.056 12:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:29.056 12:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:29.314 Read completed with error (sct=0, sc=11)
00:05:29.314 12:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:29.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.573 12:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:05:29.573 12:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:05:29.831 true
00:05:29.831 12:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:29.831 12:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:30.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:30.766 12:45:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:30.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:30.766 12:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:05:30.766 12:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:05:31.025 true
00:05:31.025 12:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:31.025 12:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:31.284 12:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:31.284 12:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:05:31.284 12:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:05:31.543 true
00:05:31.543 12:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:31.543 12:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:32.919 12:45:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:32.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.920 12:45:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:05:32.920 12:45:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:05:33.178 true
00:05:33.178 12:45:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:33.178 12:45:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:34.112 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:34.112 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:05:34.112 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:05:34.370 true
00:05:34.370 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:34.370 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:34.629 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:34.629 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:05:34.629 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:05:34.888 true
00:05:34.888 12:45:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:34.888 12:45:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:36.259 12:45:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:36.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.259 12:45:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:05:36.259 12:45:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:05:36.517 true
00:05:36.517 12:45:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:36.517 12:45:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:37.452 12:45:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:37.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:37.452 12:45:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:05:37.452 12:45:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:05:37.710 true
00:05:37.710 12:45:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:37.710 12:45:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:37.968 12:45:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:38.226 12:45:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:05:38.226 12:45:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:05:38.226 true
00:05:38.226 12:45:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:38.226 12:45:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:39.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.605 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:39.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.605 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:05:39.605 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:05:39.864 true
00:05:39.864 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:39.864 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:40.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:40.799 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:40.799 12:46:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:05:40.799 12:46:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:05:41.058 true
00:05:41.058 12:46:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:41.058 12:46:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:41.316 12:46:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:41.316 12:46:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:05:41.316 12:46:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:05:41.574 true
00:05:41.574 12:46:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:41.574 12:46:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:42.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:42.951 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:42.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:42.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:42.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:42.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:42.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:42.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:42.952 12:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:05:42.952 12:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:05:43.211 true
00:05:43.211 12:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:43.211 12:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:44.147 12:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:44.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:44.147 12:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:05:44.147 12:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:05:44.406 true
00:05:44.406 12:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:44.406 12:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:44.665 12:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:44.925 12:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:05:44.925 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:05:44.925 true
00:05:44.925 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:44.925 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:46.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:46.301 12:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:46.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:46.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:46.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:46.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:46.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:46.301 12:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:05:46.301 12:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:05:46.559 true
00:05:46.559 12:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:46.559 12:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:47.495 12:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:47.495 12:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:05:47.495 12:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:05:47.753 true
00:05:47.753 12:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:47.754 12:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:48.011 12:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:48.011 12:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:05:48.011 12:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:05:48.271 true
00:05:48.271 12:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:48.271 12:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:49.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:49.648 12:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:49.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:49.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:49.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:49.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:49.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:49.648 12:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:05:49.648 12:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:05:49.907 true
00:05:49.907 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:49.907 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:50.842 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:50.842 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:05:50.842 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:05:51.100 true
00:05:51.100 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:51.100 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:51.100 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:51.359 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:05:51.359 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:05:51.618 true
00:05:51.618 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:51.618 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:52.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:52.554 12:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:52.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:52.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:52.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:52.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:52.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:52.812 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:05:52.812 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:05:53.070 true
00:05:53.070 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:53.070 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:54.007 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:54.007 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:05:54.007 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:05:54.266 true
00:05:54.266 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:54.266 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:54.525 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:54.783 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:05:54.783 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:05:54.783 true
00:05:54.783 12:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:54.783 12:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:56.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:56.159 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:56.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:56.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:56.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:56.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:56.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:56.159 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:56.159 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:05:56.417 true
00:05:56.417 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:56.417 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:57.352 12:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:57.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:57.352 12:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:57.352 12:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:57.611 true
00:05:57.611 12:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447
00:05:57.611 12:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:57.869 12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.127 12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:58.127 12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:58.127 true 00:05:58.127 12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447 00:05:58.127 12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.502 Initializing NVMe Controllers 00:05:59.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:59.502 Controller IO queue size 128, less than required. 00:05:59.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:59.502 Controller IO queue size 128, less than required. 00:05:59.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:59.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:59.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:59.502 Initialization complete. Launching workers. 
00:05:59.502 ======================================================== 00:05:59.502 Latency(us) 00:05:59.502 Device Information : IOPS MiB/s Average min max 00:05:59.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2079.29 1.02 42534.67 1357.61 1066151.05 00:05:59.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17096.04 8.35 7465.84 1601.57 442401.17 00:05:59.503 ======================================================== 00:05:59.503 Total : 19175.33 9.36 11268.55 1357.61 1066151.05 00:05:59.503 00:05:59.503 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.503 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:59.503 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:59.761 true 00:05:59.761 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045447 00:05:59.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1045447) - No such process 00:05:59.761 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1045447 00:05:59.761 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.020 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.279 
12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:00.279 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:00.279 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:00.279 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.279 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:00.279 null0 00:06:00.279 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.279 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.279 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:00.537 null1 00:06:00.537 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.537 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.537 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:00.795 null2 00:06:00.795 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.795 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.795 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:01.054 null3 00:06:01.054 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.054 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.054 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:01.054 null4 00:06:01.054 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.054 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.054 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:01.312 null5 00:06:01.312 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.312 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.312 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:01.571 null6 00:06:01.571 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.571 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.571 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:01.830 null7 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.830 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.831 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.831 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.831 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1051114 1051116 1051117 1051119 1051121 1051123 1051125 1051126 00:06:01.831 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:01.831 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.831 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:01.831 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.831 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.831 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.090 12:46:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.090 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.348 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.348 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.348 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:06:02.348 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.349 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.349 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.349 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.349 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.608 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.867 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.867 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.867 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.867 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.867 12:46:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.867 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.867 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.867 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.867 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.867 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.124 12:46:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.124 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.125 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.125 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.125 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.125 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.125 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.125 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.125 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.384 12:46:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.384 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.643 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.643 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.643 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.643 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.643 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.643 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.643 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.643 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.902 12:46:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.902 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.161 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.161 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.161 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.161 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.162 12:46:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.162 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.421 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.421 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.421 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.421 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.421 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.421 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.421 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.421 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.680 12:46:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.680 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.940 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.940 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:06:04.940 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.941 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.202 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.202 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.202 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.202 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.202 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.202 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.203 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.203 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.203 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.203 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.203 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.203 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.203 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.203 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.462 12:46:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.462 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.721 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.721 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.721 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.721 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.721 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.721 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.721 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.721 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.980 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.980 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:05.981 rmmod nvme_tcp 00:06:05.981 rmmod nvme_fabrics 00:06:05.981 rmmod nvme_keyring 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@128 -- # set -e 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1045006 ']' 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1045006 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1045006 ']' 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1045006 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1045006 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1045006' 00:06:05.981 killing process with pid 1045006 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1045006 00:06:05.981 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1045006 00:06:06.240 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:06.240 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:06.240 12:46:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:06.240 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:06.240 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:06.240 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:06:06.240 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:06:06.240 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:06.240 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:06.240 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.240 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.240 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.146 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:08.146 00:06:08.146 real 0m48.543s 00:06:08.146 user 3m18.325s 00:06:08.146 sys 0m15.928s 00:06:08.146 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.146 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:08.146 ************************************ 00:06:08.146 END TEST nvmf_ns_hotplug_stress 00:06:08.146 ************************************ 00:06:08.146 12:46:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:08.146 12:46:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:08.146 12:46:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.146 12:46:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:08.406 ************************************ 00:06:08.406 START TEST nvmf_delete_subsystem 00:06:08.406 ************************************ 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:08.406 * Looking for test storage... 00:06:08.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.406 
12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.406 --rc genhtml_branch_coverage=1 00:06:08.406 --rc genhtml_function_coverage=1 00:06:08.406 --rc genhtml_legend=1 
00:06:08.406 --rc geninfo_all_blocks=1 00:06:08.406 --rc geninfo_unexecuted_blocks=1 00:06:08.406 00:06:08.406 ' 00:06:08.406 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.406 --rc genhtml_branch_coverage=1 00:06:08.406 --rc genhtml_function_coverage=1 00:06:08.406 --rc genhtml_legend=1 00:06:08.406 --rc geninfo_all_blocks=1 00:06:08.406 --rc geninfo_unexecuted_blocks=1 00:06:08.406 00:06:08.406 ' 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.407 --rc genhtml_branch_coverage=1 00:06:08.407 --rc genhtml_function_coverage=1 00:06:08.407 --rc genhtml_legend=1 00:06:08.407 --rc geninfo_all_blocks=1 00:06:08.407 --rc geninfo_unexecuted_blocks=1 00:06:08.407 00:06:08.407 ' 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.407 --rc genhtml_branch_coverage=1 00:06:08.407 --rc genhtml_function_coverage=1 00:06:08.407 --rc genhtml_legend=1 00:06:08.407 --rc geninfo_all_blocks=1 00:06:08.407 --rc geninfo_unexecuted_blocks=1 00:06:08.407 00:06:08.407 ' 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:08.407 12:46:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:08.407 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:15.080 12:46:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:15.080 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:15.080 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:15.080 Found net devices under 0000:86:00.0: cvl_0_0 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:15.080 12:46:34 
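The discovery pass being traced here can be summarized outside the harness. A minimal sketch, reconstructed from this log rather than from the canonical nvmf/common.sh: walk a sysfs tree, keep Intel E810 functions (0x8086 with device ID 0x1592 or 0x159b, the IDs matched above), and report the net devices bound to each. `SYSFS` is a parameter I introduce so the scan can be pointed at a fake tree; on this rig it would report cvl_0_0 and cvl_0_1, elsewhere it may report nothing.

```shell
#!/usr/bin/env bash
# Sketch of the gather_supported_nvmf_pci_devs discovery step (hedged:
# reconstructed from this log, not the real nvmf/common.sh implementation).
set -euo pipefail

scan_e810() {
    local sysfs=$1 dev vendor device id n
    local -a ids=(0x1592 0x159b) net_devs=()
    for dev in "$sysfs"/*/; do
        # Each PCI function exposes its vendor/device IDs as sysfs files.
        [[ -r "${dev}vendor" && -r "${dev}device" ]] || continue
        vendor=$(<"${dev}vendor") device=$(<"${dev}device")
        [[ "$vendor" == 0x8086 ]] || continue
        for id in "${ids[@]}"; do
            if [[ "$device" == "$id" ]]; then
                echo "Found ${dev%/} ($vendor - $device)" >&2
                # The bound netdev names live under <pci>/net/.
                for n in "${dev}net/"*/; do
                    [[ -d "$n" ]] && net_devs+=("$(basename "$n")")
                done
            fi
        done
    done
    printf '%s\n' "${net_devs[@]:-}"
}

scan_e810 "${SYSFS:-/sys/bus/pci/devices}"
```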
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:15.080 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:15.081 Found net devices under 0000:86:00.1: cvl_0_1 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:15.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:15.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:06:15.081 00:06:15.081 --- 10.0.0.2 ping statistics --- 00:06:15.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.081 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:15.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:15.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:06:15.081 00:06:15.081 --- 10.0.0.1 ping statistics --- 00:06:15.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.081 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1055515 00:06:15.081 12:46:34 
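The nvmf_tcp_init sequence traced above (flush both ports, create a namespace for the target side, move cvl_0_0 into it, assign 10.0.0.1/24 and 10.0.0.2/24, open TCP/4420 in iptables, verify with ping in both directions) can be sketched as a standalone script. This is a hedged reconstruction from this log, not the real helper; interface and namespace names are the ones seen here, and `DRY_RUN` (my addition) prints the commands instead of executing them, since the real thing needs root.

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator namespace split performed by nvmf_tcp_init
# (hedged: reconstructed from this log, not the canonical nvmf/common.sh).
set -euo pipefail

TARGET_IF=${TARGET_IF:-cvl_0_0}   # moved into the namespace, gets the target IP
INIT_IF=${INIT_IF:-cvl_0_1}       # initiator side, stays in the root namespace
NS=${NS:-cvl_0_0_ns_spdk}

CMDS=()
run() { CMDS+=("$*"); if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INIT_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INIT_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

On a phy rig the two ports are cabled to each other, which is why the root namespace can ping the namespaced port over the wire.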
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1055515 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1055515 ']' 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.081 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.081 [2024-10-15 12:46:34.799030] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:06:15.081 [2024-10-15 12:46:34.799073] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.081 [2024-10-15 12:46:34.873264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.081 [2024-10-15 12:46:34.914071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:15.081 [2024-10-15 12:46:34.914110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.081 [2024-10-15 12:46:34.914117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.081 [2024-10-15 12:46:34.914124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.081 [2024-10-15 12:46:34.914129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:15.081 [2024-10-15 12:46:34.915299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.081 [2024-10-15 12:46:34.915300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.081 [2024-10-15 12:46:35.049890] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.081 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.082 [2024-10-15 12:46:35.070120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.082 NULL1 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.082 12:46:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.082 Delay0 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1055747 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:15.082 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:15.082 [2024-10-15 12:46:35.171846] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
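The rpc_cmd sequence above (create the TCP transport, the subsystem, its listener, a null bdev wrapped in a delay bdev so I/O stays in flight, then delete the subsystem while spdk_nvme_perf is running) can be replayed by hand. A hedged sketch using SPDK's scripts/rpc.py, with the NQN, serial, and latency values copied from this log; `DRY_RUN` (my addition) prints instead of executing, since a live nvmf_tgt on /var/tmp/spdk.sock is required.

```shell
#!/usr/bin/env bash
# Sketch of delete_subsystem.sh's setup as plain rpc.py calls (hedged:
# reconstructed from this log, assumes a running nvmf_tgt).
set -euo pipefail

RPC="scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

CALLS=()
rpc() { CALLS+=("$*"); if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "+ $RPC $*"; else $RPC "$@"; fi; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
# Large fixed latencies (values from the log) keep I/O pending at delete time.
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0
# ... start spdk_nvme_perf against 10.0.0.2:4420, then tear down mid-I/O:
rpc nvmf_delete_subsystem "$NQN"
```

The flood of "completed with error (sct=0, sc=8)" lines that follows is the expected result: queued commands are aborted when their submission queues go away with the subsystem.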
00:06:16.985 12:46:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:16.985 12:46:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.985 12:46:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.985 Write completed with error (sct=0, sc=8) 00:06:16.985 Write completed with error (sct=0, sc=8) 00:06:16.985 Write completed with error (sct=0, sc=8) 00:06:16.985 Read completed with error (sct=0, sc=8) 00:06:16.985 starting I/O failed: -6 00:06:16.985 Read completed with error (sct=0, sc=8) 00:06:16.985 Read completed with error (sct=0, sc=8) 00:06:16.985 Read completed with error (sct=0, sc=8) 00:06:16.985 Read completed with error (sct=0, sc=8) 00:06:16.985 starting I/O failed: -6 00:06:16.985 Write completed with error (sct=0, sc=8) 00:06:16.985 Read completed with error (sct=0, sc=8) 00:06:16.985 Write completed with error (sct=0, sc=8) 00:06:16.985 Read completed with error (sct=0, sc=8) 00:06:16.985 starting I/O failed: -6 00:06:16.985 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error 
(sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 
Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 
00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 starting I/O failed: -6 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 
00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, 
sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.986 starting I/O failed: -6 00:06:16.986 Read completed with error (sct=0, sc=8) 00:06:16.986 Write completed with error (sct=0, sc=8) 00:06:16.987 starting I/O failed: -6 00:06:16.987 Write completed with error (sct=0, sc=8) 00:06:16.987 Write completed with error (sct=0, sc=8) 00:06:16.987 starting I/O failed: -6 00:06:16.987 Read completed with error (sct=0, sc=8) 00:06:16.987 Write completed with error (sct=0, sc=8) 00:06:16.987 starting I/O failed: -6 00:06:16.987 Write completed with error (sct=0, sc=8) 00:06:16.987 Write completed with error (sct=0, sc=8) 
00:06:16.987 starting I/O failed: -6
[... dozens more "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries between 00:06:16.986 and 00:06:16.987 condensed ...]
00:06:16.987 [2024-10-15 12:46:37.301774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc2c0000c00 is same with the state(6) to be set
00:06:18.364 [2024-10-15 12:46:38.268268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793a70 is same with the state(6) to be set
[... further "Read/Write completed with error (sct=0, sc=8)" entries at 00:06:18.364 condensed ...]
00:06:18.364 [2024-10-15 12:46:38.302699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc2c000cfe0 is same with the state(6) to be set
[... further completion-error entries condensed ...]
00:06:18.364 [2024-10-15 12:46:38.302960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc2c000d7a0 is same with the state(6) to be set
[... further completion-error entries condensed ...]
00:06:18.364 [2024-10-15 12:46:38.305086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792390 is same with the state(6) to be set
[... further completion-error entries condensed ...]
00:06:18.365 [2024-10-15 12:46:38.305681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792750 is same with the state(6) to be set
00:06:18.365 Initializing NVMe Controllers
00:06:18.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:18.365 Controller IO queue size 128, less than required.
00:06:18.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:18.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:18.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:18.365 Initialization complete. Launching workers.
00:06:18.365 ========================================================
00:06:18.365 Latency(us)
00:06:18.365 Device Information : IOPS MiB/s Average min max
00:06:18.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.79 0.09 897061.06 352.57 1011077.94
00:06:18.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 182.34 0.09 922290.71 384.28 2000147.22
00:06:18.365 ========================================================
00:06:18.365 Total : 372.13 0.18 909423.26 352.57 2000147.22
00:06:18.365
00:06:18.365 [2024-10-15 12:46:38.305990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1793a70 (9): Bad file descriptor
00:06:18.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:18.365 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.365 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:18.365 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
target/delete_subsystem.sh@35 -- # kill -0 1055747 00:06:18.365 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1055747 00:06:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1055747) - No such process 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1055747 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1055747 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1055747 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.626 12:46:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:18.626 [2024-10-15 12:46:38.838569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # 
perf_pid=1056230
00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1056230
00:06:18.626 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:18.626 [2024-10-15 12:46:38.913795] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
[... the "(( delay++ > 20 ))" / "kill -0 1056230" / "sleep 0.5" poll iterations repeat at 00:06:19.193, 00:06:19.761, 00:06:20.330, 00:06:20.588, 00:06:21.155 and 00:06:21.722 while spdk_nvme_perf runs ...]
00:06:21.722 Initializing NVMe Controllers
00:06:21.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:21.722 Controller IO queue size 128, less than required.
00:06:21.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:21.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:21.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:21.722 Initialization complete. Launching workers.
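The repeated `kill -0 1056230` / `sleep 0.5` xtrace entries show how delete_subsystem.sh waits for the spdk_nvme_perf child: probe the PID with signal 0 (existence check, nothing is delivered), sleep, and give up once a retry budget is exhausted. A self-contained sketch of that polling pattern (the `wait_for_exit` name and the exact budget here are illustrative, not the script itself):

```shell
#!/usr/bin/env bash
# Poll until a process exits, mirroring the kill -0 / sleep 0.5 loop
# traced above. wait_for_exit is a made-up helper name.
wait_for_exit() {
    local pid=$1 delay=0
    # kill -0 sends no signal; it only tests whether the PID still exists.
    while kill -0 "$pid" 2>/dev/null; do
        # Give up after ~10s (20 iterations x 0.5s), like the log's budget.
        if (( delay++ > 20 )); then
            echo "process $pid still alive after $delay checks" >&2
            return 1
        fi
        sleep 0.5
    done
    return 0
}

# Example: a short-lived background job exits well within the budget.
sleep 1 &
wait_for_exit "$!" && echo "process exited"
```

Note that in bash this works because the shell reaps exited background children as they die, so `kill -0` starts failing with ESRCH once the child is gone rather than still succeeding on a zombie.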
00:06:21.722 ========================================================
00:06:21.722 Latency(us)
00:06:21.722 Device Information : IOPS MiB/s Average min max
00:06:21.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002900.95 1000106.10 1008427.26
00:06:21.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003745.96 1000150.98 1041086.31
00:06:21.722 ========================================================
00:06:21.722 Total : 256.00 0.12 1003323.46 1000106.10 1041086.31
00:06:21.722
00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1056230
00:06:22.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1056230) - No such process
00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1056230
00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:06:22.291 rmmod nvme_tcp 00:06:22.291 rmmod nvme_fabrics 00:06:22.291 rmmod nvme_keyring 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1055515 ']' 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1055515 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1055515 ']' 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1055515 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1055515 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1055515' 00:06:22.291 killing process with pid 1055515 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1055515 00:06:22.291 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 
1055515 00:06:22.550 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:22.550 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:22.550 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:22.550 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:22.550 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:06:22.550 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:22.550 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:06:22.550 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:22.550 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:22.550 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.550 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.550 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.455 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:24.455 00:06:24.455 real 0m16.255s 00:06:24.455 user 0m29.163s 00:06:24.455 sys 0m5.549s 00:06:24.455 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.455 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.455 ************************************ 00:06:24.455 END TEST 
nvmf_delete_subsystem 00:06:24.455 ************************************ 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:24.715 ************************************ 00:06:24.715 START TEST nvmf_host_management 00:06:24.715 ************************************ 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:24.715 * Looking for test storage... 00:06:24.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.715 12:46:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:24.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.715 --rc genhtml_branch_coverage=1 00:06:24.715 --rc genhtml_function_coverage=1 00:06:24.715 --rc genhtml_legend=1 00:06:24.715 --rc 
geninfo_all_blocks=1 00:06:24.715 --rc geninfo_unexecuted_blocks=1 00:06:24.715 00:06:24.715 ' 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:24.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.715 --rc genhtml_branch_coverage=1 00:06:24.715 --rc genhtml_function_coverage=1 00:06:24.715 --rc genhtml_legend=1 00:06:24.715 --rc geninfo_all_blocks=1 00:06:24.715 --rc geninfo_unexecuted_blocks=1 00:06:24.715 00:06:24.715 ' 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:24.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.715 --rc genhtml_branch_coverage=1 00:06:24.715 --rc genhtml_function_coverage=1 00:06:24.715 --rc genhtml_legend=1 00:06:24.715 --rc geninfo_all_blocks=1 00:06:24.715 --rc geninfo_unexecuted_blocks=1 00:06:24.715 00:06:24.715 ' 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:24.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.715 --rc genhtml_branch_coverage=1 00:06:24.715 --rc genhtml_function_coverage=1 00:06:24.715 --rc genhtml_legend=1 00:06:24.715 --rc geninfo_all_blocks=1 00:06:24.715 --rc geninfo_unexecuted_blocks=1 00:06:24.715 00:06:24.715 ' 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.715 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.715 
12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:24.715 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:24.716 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:31.296 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:31.296 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.296 12:46:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:31.296 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:31.297 Found net devices under 0000:86:00.0: cvl_0_0 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:31.297 Found net devices under 0000:86:00.1: cvl_0_1 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:31.297 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:31.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:31.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:06:31.297 00:06:31.297 --- 10.0.0.2 ping statistics --- 00:06:31.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.297 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:31.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:31.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:06:31.297 00:06:31.297 --- 10.0.0.1 ping statistics --- 00:06:31.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.297 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.297 12:46:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1060463 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1060463 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1060463 ']' 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.297 [2024-10-15 12:46:51.130502] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:06:31.297 [2024-10-15 12:46:51.130542] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.297 [2024-10-15 12:46:51.203044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.297 [2024-10-15 12:46:51.245727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.297 [2024-10-15 12:46:51.245764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.297 [2024-10-15 12:46:51.245771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.297 [2024-10-15 12:46:51.245776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.297 [2024-10-15 12:46:51.245783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:31.297 [2024-10-15 12:46:51.247212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.297 [2024-10-15 12:46:51.247328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.297 [2024-10-15 12:46:51.247433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.297 [2024-10-15 12:46:51.247434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.297 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.298 [2024-10-15 12:46:51.384001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:31.298 12:46:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.298 Malloc0 00:06:31.298 [2024-10-15 12:46:51.461904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1060525 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1060525 /var/tmp/bdevperf.sock 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1060525 ']' 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:31.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:31.298 { 00:06:31.298 "params": { 00:06:31.298 "name": "Nvme$subsystem", 00:06:31.298 "trtype": "$TEST_TRANSPORT", 00:06:31.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:31.298 "adrfam": "ipv4", 00:06:31.298 "trsvcid": "$NVMF_PORT", 00:06:31.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:31.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:31.298 "hdgst": ${hdgst:-false}, 
00:06:31.298 "ddgst": ${ddgst:-false} 00:06:31.298 }, 00:06:31.298 "method": "bdev_nvme_attach_controller" 00:06:31.298 } 00:06:31.298 EOF 00:06:31.298 )") 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:31.298 12:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:31.298 "params": { 00:06:31.298 "name": "Nvme0", 00:06:31.298 "trtype": "tcp", 00:06:31.298 "traddr": "10.0.0.2", 00:06:31.298 "adrfam": "ipv4", 00:06:31.298 "trsvcid": "4420", 00:06:31.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:31.298 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:31.298 "hdgst": false, 00:06:31.298 "ddgst": false 00:06:31.298 }, 00:06:31.298 "method": "bdev_nvme_attach_controller" 00:06:31.298 }' 00:06:31.298 [2024-10-15 12:46:51.559513] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:06:31.298 [2024-10-15 12:46:51.559559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060525 ] 00:06:31.556 [2024-10-15 12:46:51.629674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.556 [2024-10-15 12:46:51.670827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.814 Running I/O for 10 seconds... 
00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=93 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 93 -ge 100 ']' 00:06:31.814 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.072 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.072 [2024-10-15 12:46:52.394343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16502c0 is same with the state(6) to be set 00:06:32.072 [2024-10-15 12:46:52.394399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16502c0 is same with the state(6) to be set 00:06:32.072 [2024-10-15 12:46:52.394407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16502c0 is same with the state(6) to be set 00:06:32.072 [2024-10-15 12:46:52.394413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16502c0 is same with the state(6) to be set 00:06:32.072 [2024-10-15 12:46:52.394420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16502c0 is same with the state(6) to be set 00:06:32.072 [2024-10-15 12:46:52.394427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16502c0 is same with the state(6) to be set 00:06:32.072 [2024-10-15 12:46:52.394433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16502c0 is same with the state(6) to be set 00:06:32.072 [2024-10-15 
12:46:52.394444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16502c0 is same with the state(6) to be set 00:06:32.072 [... same *ERROR* line repeated with successive timestamps through 12:46:52.394766 ...] 00:06:32.332 [2024-10-15 12:46:52.394772]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16502c0 is same with the state(6) to be set 00:06:32.332 [... same *ERROR* line repeated through 12:46:52.394796 ...] 00:06:32.332 [2024-10-15 12:46:52.394865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.332 [2024-10-15 12:46:52.394898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.332 [... the READ command / ABORTED - SQ DELETION completion pair repeated for cid 1-63, lba 98432-106368, timestamps 12:46:52.394920 through 12:46:52.395860 ...] 00:06:32.334 [2024-10-15 12:46:52.395867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1173850 is same with the state(6) to be set 00:06:32.334 [2024-10-15 12:46:52.395917] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1173850 was disconnected and freed. reset controller.
00:06:32.334 [2024-10-15 12:46:52.396833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:32.334 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.334 task offset: 98304 on job bdev=Nvme0n1 fails 00:06:32.334 00:06:32.334 Latency(us) 00:06:32.334 [2024-10-15T10:46:52.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:32.334 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:32.334 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:32.334 Verification LBA range: start 0x0 length 0x400 00:06:32.334 Nvme0n1 : 0.40 1897.66 118.60 158.14 0.00 30314.16 3713.71 27088.21 00:06:32.334 [2024-10-15T10:46:52.653Z] =================================================================================================================== 00:06:32.334 [2024-10-15T10:46:52.653Z] Total : 1897.66 118.60 158.14 0.00 30314.16 3713.71 27088.21 00:06:32.334 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:32.334 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.334 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.334 [2024-10-15 12:46:52.399197] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.334 [2024-10-15 12:46:52.399218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5a5c0 (9): Bad file descriptor 00:06:32.334 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.334 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:32.334 [2024-10-15 12:46:52.533778] 
bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:33.268 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1060525 00:06:33.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1060525) - No such process 00:06:33.268 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:33.268 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:33.268 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:33.268 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:33.268 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:33.269 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:33.269 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:33.269 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:33.269 { 00:06:33.269 "params": { 00:06:33.269 "name": "Nvme$subsystem", 00:06:33.269 "trtype": "$TEST_TRANSPORT", 00:06:33.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:33.269 "adrfam": "ipv4", 00:06:33.269 "trsvcid": "$NVMF_PORT", 00:06:33.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:33.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:33.269 "hdgst": ${hdgst:-false}, 00:06:33.269 "ddgst": ${ddgst:-false} 00:06:33.269 }, 00:06:33.269 "method": 
"bdev_nvme_attach_controller" 00:06:33.269 } 00:06:33.269 EOF 00:06:33.269 )") 00:06:33.269 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:33.269 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:33.269 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:33.269 12:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:33.269 "params": { 00:06:33.269 "name": "Nvme0", 00:06:33.269 "trtype": "tcp", 00:06:33.269 "traddr": "10.0.0.2", 00:06:33.269 "adrfam": "ipv4", 00:06:33.269 "trsvcid": "4420", 00:06:33.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:33.269 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:33.269 "hdgst": false, 00:06:33.269 "ddgst": false 00:06:33.269 }, 00:06:33.269 "method": "bdev_nvme_attach_controller" 00:06:33.269 }' 00:06:33.269 [2024-10-15 12:46:53.460059] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:06:33.269 [2024-10-15 12:46:53.460105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060973 ] 00:06:33.269 [2024-10-15 12:46:53.528336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.269 [2024-10-15 12:46:53.568721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.835 Running I/O for 1 seconds... 
00:06:34.768 1920.00 IOPS, 120.00 MiB/s 00:06:34.768 Latency(us) 00:06:34.768 [2024-10-15T10:46:55.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:34.768 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:34.768 Verification LBA range: start 0x0 length 0x400 00:06:34.768 Nvme0n1 : 1.02 1936.19 121.01 0.00 0.00 32526.41 4275.44 26838.55 00:06:34.768 [2024-10-15T10:46:55.087Z] =================================================================================================================== 00:06:34.768 [2024-10-15T10:46:55.087Z] Total : 1936.19 121.01 0.00 0.00 32526.41 4275.44 26838.55 00:06:34.768 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:34.768 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:34.768 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:34.768 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:34.768 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:34.768 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:34.768 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:35.027 12:46:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:35.027 rmmod nvme_tcp 00:06:35.027 rmmod nvme_fabrics 00:06:35.027 rmmod nvme_keyring 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1060463 ']' 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1060463 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1060463 ']' 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1060463 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1060463 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1060463' 00:06:35.027 killing process with pid 1060463 00:06:35.027 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1060463 00:06:35.027 12:46:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1060463 00:06:35.286 [2024-10-15 12:46:55.360965] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:35.286 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:35.286 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:35.286 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:35.286 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:35.286 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:06:35.286 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:35.286 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:06:35.286 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:35.286 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:35.286 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.286 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.286 12:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.191 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:37.191 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:37.191 00:06:37.191 real 0m12.643s 00:06:37.191 user 0m20.669s 
00:06:37.191 sys 0m5.691s 00:06:37.191 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.191 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.191 ************************************ 00:06:37.191 END TEST nvmf_host_management 00:06:37.191 ************************************ 00:06:37.191 12:46:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:37.191 12:46:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:37.191 12:46:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.191 12:46:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.452 ************************************ 00:06:37.452 START TEST nvmf_lvol 00:06:37.452 ************************************ 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:37.452 * Looking for test storage... 
00:06:37.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.452 12:46:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:37.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.452 --rc genhtml_branch_coverage=1 00:06:37.452 --rc genhtml_function_coverage=1 00:06:37.452 --rc genhtml_legend=1 00:06:37.452 --rc geninfo_all_blocks=1 00:06:37.452 --rc geninfo_unexecuted_blocks=1 
00:06:37.452 00:06:37.452 ' 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:37.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.452 --rc genhtml_branch_coverage=1 00:06:37.452 --rc genhtml_function_coverage=1 00:06:37.452 --rc genhtml_legend=1 00:06:37.452 --rc geninfo_all_blocks=1 00:06:37.452 --rc geninfo_unexecuted_blocks=1 00:06:37.452 00:06:37.452 ' 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:37.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.452 --rc genhtml_branch_coverage=1 00:06:37.452 --rc genhtml_function_coverage=1 00:06:37.452 --rc genhtml_legend=1 00:06:37.452 --rc geninfo_all_blocks=1 00:06:37.452 --rc geninfo_unexecuted_blocks=1 00:06:37.452 00:06:37.452 ' 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:37.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.452 --rc genhtml_branch_coverage=1 00:06:37.452 --rc genhtml_function_coverage=1 00:06:37.452 --rc genhtml_legend=1 00:06:37.452 --rc geninfo_all_blocks=1 00:06:37.452 --rc geninfo_unexecuted_blocks=1 00:06:37.452 00:06:37.452 ' 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.452 12:46:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.452 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:37.453 12:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.023 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:44.023 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:44.024 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:44.024 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:44.024 
12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:44.024 Found net devices under 0000:86:00.0: cvl_0_0 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:44.024 12:47:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:44.024 Found net devices under 0000:86:00.1: cvl_0_1 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
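The discovery loop traced above (common.sh@409 and @425) maps each matching NIC's PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/*` and then stripping the path prefix with `${pci_net_devs[@]##*/}`, yielding names like `cvl_0_0`. A sketch of the same lookup against a mock sysfs tree, so it runs without the actual hardware (the mock directory layout is an assumption; the PCI address and interface name are copied from the log):

```shell
#!/usr/bin/env bash
# Mimic the lookup in nvmf/common.sh: glob the net/ directory under a PCI
# device node, then keep only the interface names. A temporary mock tree
# stands in for the real /sys so the sketch is runnable anywhere.
pci_to_netdevs() {
    local pci_dir="$1"
    local devs=("$pci_dir"/net/*)   # like common.sh@409
    devs=("${devs[@]##*/}")         # like common.sh@425: strip path prefix
    printf '%s\n' "${devs[@]}"
}

sysfs="$(mktemp -d)"
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0"   # names taken from the log
pci_to_netdevs "$sysfs/0000:86:00.0"         # prints: cvl_0_0
rm -rf "$sysfs"
```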
-- # NVMF_SECOND_INITIATOR_IP= 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:44.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:44.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:06:44.024 00:06:44.024 --- 10.0.0.2 ping statistics --- 00:06:44.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.024 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:44.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:44.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:06:44.024 00:06:44.024 --- 10.0.0.1 ping statistics --- 00:06:44.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.024 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1064758 00:06:44.024 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1064758 00:06:44.025 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:44.025 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1064758 ']' 00:06:44.025 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.025 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.025 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.025 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.025 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.025 [2024-10-15 12:47:03.837923] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:06:44.025 [2024-10-15 12:47:03.837965] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.025 [2024-10-15 12:47:03.908399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.025 [2024-10-15 12:47:03.950665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.025 [2024-10-15 12:47:03.950700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.025 [2024-10-15 12:47:03.950707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.025 [2024-10-15 12:47:03.950713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.025 [2024-10-15 12:47:03.950718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:44.025 [2024-10-15 12:47:03.951971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.025 [2024-10-15 12:47:03.952076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.025 [2024-10-15 12:47:03.952077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.025 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.025 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:44.025 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:44.025 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:44.025 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.025 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.025 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:44.025 [2024-10-15 12:47:04.260594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.025 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:44.284 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:44.284 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:44.542 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:44.542 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:44.801 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:45.058 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b484743a-c16c-40ef-8cdf-f0d43f28e033 00:06:45.058 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b484743a-c16c-40ef-8cdf-f0d43f28e033 lvol 20 00:06:45.059 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=936bec80-1f7d-4fc5-977f-3726ee0e806b 00:06:45.059 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:45.317 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 936bec80-1f7d-4fc5-977f-3726ee0e806b 00:06:45.575 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:45.834 [2024-10-15 12:47:05.931644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.834 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:46.093 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1065245 00:06:46.093 12:47:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:46.093 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:47.031 12:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 936bec80-1f7d-4fc5-977f-3726ee0e806b MY_SNAPSHOT 00:06:47.290 12:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=de30ff8f-2484-40d8-b37b-1967e00a1940 00:06:47.290 12:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 936bec80-1f7d-4fc5-977f-3726ee0e806b 30 00:06:47.549 12:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone de30ff8f-2484-40d8-b37b-1967e00a1940 MY_CLONE 00:06:47.808 12:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c6097d5d-78a8-4558-ad9b-6e95f24b145a 00:06:47.808 12:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c6097d5d-78a8-4558-ad9b-6e95f24b145a 00:06:48.376 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1065245 00:06:56.494 Initializing NVMe Controllers 00:06:56.494 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:56.494 Controller IO queue size 128, less than required. 00:06:56.494 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:56.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:56.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:56.494 Initialization complete. Launching workers. 00:06:56.494 ======================================================== 00:06:56.494 Latency(us) 00:06:56.494 Device Information : IOPS MiB/s Average min max 00:06:56.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12505.00 48.85 10237.84 1603.63 45441.46 00:06:56.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12464.30 48.69 10270.62 3022.13 38326.47 00:06:56.494 ======================================================== 00:06:56.494 Total : 24969.30 97.54 10254.20 1603.63 45441.46 00:06:56.494 00:06:56.494 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:56.494 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 936bec80-1f7d-4fc5-977f-3726ee0e806b 00:06:56.753 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b484743a-c16c-40ef-8cdf-f0d43f28e033 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:57.013 rmmod nvme_tcp 00:06:57.013 rmmod nvme_fabrics 00:06:57.013 rmmod nvme_keyring 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1064758 ']' 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1064758 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1064758 ']' 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1064758 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1064758 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1064758' 00:06:57.013 killing process with pid 1064758 00:06:57.013 12:47:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1064758 00:06:57.013 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1064758 00:06:57.272 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:57.272 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:57.272 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:57.272 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:57.272 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:06:57.272 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:57.272 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:06:57.272 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:57.272 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:57.272 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.272 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.272 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:59.808 00:06:59.808 real 0m22.024s 00:06:59.808 user 1m3.236s 00:06:59.808 sys 0m7.766s 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:59.808 ************************************ 00:06:59.808 END TEST 
nvmf_lvol 00:06:59.808 ************************************ 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:59.808 ************************************ 00:06:59.808 START TEST nvmf_lvs_grow 00:06:59.808 ************************************ 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:59.808 * Looking for test storage... 00:06:59.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.808 12:47:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:59.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.808 --rc genhtml_branch_coverage=1 00:06:59.808 --rc genhtml_function_coverage=1 00:06:59.808 --rc genhtml_legend=1 00:06:59.808 --rc geninfo_all_blocks=1 00:06:59.808 --rc geninfo_unexecuted_blocks=1 00:06:59.808 00:06:59.808 ' 
00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:59.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.808 --rc genhtml_branch_coverage=1 00:06:59.808 --rc genhtml_function_coverage=1 00:06:59.808 --rc genhtml_legend=1 00:06:59.808 --rc geninfo_all_blocks=1 00:06:59.808 --rc geninfo_unexecuted_blocks=1 00:06:59.808 00:06:59.808 ' 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:59.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.808 --rc genhtml_branch_coverage=1 00:06:59.808 --rc genhtml_function_coverage=1 00:06:59.808 --rc genhtml_legend=1 00:06:59.808 --rc geninfo_all_blocks=1 00:06:59.808 --rc geninfo_unexecuted_blocks=1 00:06:59.808 00:06:59.808 ' 00:06:59.808 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:59.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.808 --rc genhtml_branch_coverage=1 00:06:59.808 --rc genhtml_function_coverage=1 00:06:59.808 --rc genhtml_legend=1 00:06:59.808 --rc geninfo_all_blocks=1 00:06:59.808 --rc geninfo_unexecuted_blocks=1 00:06:59.808 00:06:59.808 ' 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.809 12:47:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.809 
12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.809 12:47:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:59.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.809 
12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:59.809 12:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:06.391 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:06.391 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.391 
12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:06.391 Found net devices under 0000:86:00.0: cvl_0_0 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.391 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:06.392 Found net devices under 0000:86:00.1: cvl_0_1 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:06.392 12:47:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:06.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:07:06.392 00:07:06.392 --- 10.0.0.2 ping statistics --- 00:07:06.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.392 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:07:06.392 00:07:06.392 --- 10.0.0.1 ping statistics --- 00:07:06.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.392 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1070628 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1070628 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1070628 ']' 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.392 12:47:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.392 [2024-10-15 12:47:25.957377] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:07:06.392 [2024-10-15 12:47:25.957428] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.392 [2024-10-15 12:47:26.031542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.392 [2024-10-15 12:47:26.070687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.392 [2024-10-15 12:47:26.070721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.392 [2024-10-15 12:47:26.070728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.392 [2024-10-15 12:47:26.070735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.392 [2024-10-15 12:47:26.070743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:06.392 [2024-10-15 12:47:26.071287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:06.392 [2024-10-15 12:47:26.389988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.392 ************************************ 00:07:06.392 START TEST lvs_grow_clean 00:07:06.392 ************************************ 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:06.392 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:06.650 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=07aa57d2-3280-4576-9856-a1cd9da1b53d 00:07:06.650 12:47:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07aa57d2-3280-4576-9856-a1cd9da1b53d 00:07:06.650 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:06.908 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:06.908 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:06.908 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 07aa57d2-3280-4576-9856-a1cd9da1b53d lvol 150 00:07:07.167 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5a6ebb59-a44f-4ad5-b391-201fe8af7b80 00:07:07.167 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:07.167 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:07.167 [2024-10-15 12:47:27.421530] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:07.167 [2024-10-15 12:47:27.421583] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:07.167 true 00:07:07.167 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07aa57d2-3280-4576-9856-a1cd9da1b53d 00:07:07.167 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:07.426 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:07.426 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:07.685 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5a6ebb59-a44f-4ad5-b391-201fe8af7b80 00:07:07.685 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:07.943 [2024-10-15 12:47:28.167783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.943 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:08.202 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:08.202 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1071130 00:07:08.202 12:47:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:08.202 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1071130 /var/tmp/bdevperf.sock 00:07:08.202 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1071130 ']' 00:07:08.202 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:08.202 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.202 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:08.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:08.202 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.202 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:08.202 [2024-10-15 12:47:28.400035] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:07:08.202 [2024-10-15 12:47:28.400079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1071130 ] 00:07:08.202 [2024-10-15 12:47:28.466719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.202 [2024-10-15 12:47:28.506479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.460 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.460 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:08.460 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:08.722 Nvme0n1 00:07:08.722 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:08.980 [ 00:07:08.980 { 00:07:08.980 "name": "Nvme0n1", 00:07:08.980 "aliases": [ 00:07:08.980 "5a6ebb59-a44f-4ad5-b391-201fe8af7b80" 00:07:08.980 ], 00:07:08.980 "product_name": "NVMe disk", 00:07:08.980 "block_size": 4096, 00:07:08.980 "num_blocks": 38912, 00:07:08.980 "uuid": "5a6ebb59-a44f-4ad5-b391-201fe8af7b80", 00:07:08.980 "numa_id": 1, 00:07:08.980 "assigned_rate_limits": { 00:07:08.980 "rw_ios_per_sec": 0, 00:07:08.980 "rw_mbytes_per_sec": 0, 00:07:08.980 "r_mbytes_per_sec": 0, 00:07:08.980 "w_mbytes_per_sec": 0 00:07:08.980 }, 00:07:08.980 "claimed": false, 00:07:08.980 "zoned": false, 00:07:08.980 "supported_io_types": { 00:07:08.980 "read": true, 
00:07:08.980 "write": true, 00:07:08.980 "unmap": true, 00:07:08.980 "flush": true, 00:07:08.980 "reset": true, 00:07:08.980 "nvme_admin": true, 00:07:08.980 "nvme_io": true, 00:07:08.980 "nvme_io_md": false, 00:07:08.980 "write_zeroes": true, 00:07:08.980 "zcopy": false, 00:07:08.980 "get_zone_info": false, 00:07:08.980 "zone_management": false, 00:07:08.980 "zone_append": false, 00:07:08.980 "compare": true, 00:07:08.980 "compare_and_write": true, 00:07:08.980 "abort": true, 00:07:08.980 "seek_hole": false, 00:07:08.980 "seek_data": false, 00:07:08.980 "copy": true, 00:07:08.980 "nvme_iov_md": false 00:07:08.980 }, 00:07:08.980 "memory_domains": [ 00:07:08.980 { 00:07:08.980 "dma_device_id": "system", 00:07:08.980 "dma_device_type": 1 00:07:08.980 } 00:07:08.980 ], 00:07:08.980 "driver_specific": { 00:07:08.980 "nvme": [ 00:07:08.980 { 00:07:08.980 "trid": { 00:07:08.980 "trtype": "TCP", 00:07:08.980 "adrfam": "IPv4", 00:07:08.980 "traddr": "10.0.0.2", 00:07:08.980 "trsvcid": "4420", 00:07:08.980 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:08.980 }, 00:07:08.980 "ctrlr_data": { 00:07:08.980 "cntlid": 1, 00:07:08.980 "vendor_id": "0x8086", 00:07:08.980 "model_number": "SPDK bdev Controller", 00:07:08.980 "serial_number": "SPDK0", 00:07:08.980 "firmware_revision": "25.01", 00:07:08.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:08.980 "oacs": { 00:07:08.980 "security": 0, 00:07:08.980 "format": 0, 00:07:08.980 "firmware": 0, 00:07:08.980 "ns_manage": 0 00:07:08.980 }, 00:07:08.980 "multi_ctrlr": true, 00:07:08.980 "ana_reporting": false 00:07:08.980 }, 00:07:08.980 "vs": { 00:07:08.980 "nvme_version": "1.3" 00:07:08.980 }, 00:07:08.980 "ns_data": { 00:07:08.980 "id": 1, 00:07:08.980 "can_share": true 00:07:08.980 } 00:07:08.980 } 00:07:08.980 ], 00:07:08.980 "mp_policy": "active_passive" 00:07:08.980 } 00:07:08.980 } 00:07:08.980 ] 00:07:08.980 12:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1071159 00:07:08.980 12:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:08.980 12:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:08.980 Running I/O for 10 seconds... 00:07:09.917 Latency(us) 00:07:09.917 [2024-10-15T10:47:30.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:09.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.917 Nvme0n1 : 1.00 23505.00 91.82 0.00 0.00 0.00 0.00 0.00 00:07:09.917 [2024-10-15T10:47:30.236Z] =================================================================================================================== 00:07:09.917 [2024-10-15T10:47:30.236Z] Total : 23505.00 91.82 0.00 0.00 0.00 0.00 0.00 00:07:09.917 00:07:10.927 12:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 07aa57d2-3280-4576-9856-a1cd9da1b53d 00:07:10.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.927 Nvme0n1 : 2.00 23611.00 92.23 0.00 0.00 0.00 0.00 0.00 00:07:10.927 [2024-10-15T10:47:31.246Z] =================================================================================================================== 00:07:10.927 [2024-10-15T10:47:31.246Z] Total : 23611.00 92.23 0.00 0.00 0.00 0.00 0.00 00:07:10.927 00:07:11.199 true 00:07:11.199 12:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07aa57d2-3280-4576-9856-a1cd9da1b53d 00:07:11.199 12:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:11.199 12:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:11.199 12:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:11.199 12:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1071159 00:07:12.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.136 Nvme0n1 : 3.00 23642.67 92.35 0.00 0.00 0.00 0.00 0.00 00:07:12.136 [2024-10-15T10:47:32.455Z] =================================================================================================================== 00:07:12.136 [2024-10-15T10:47:32.455Z] Total : 23642.67 92.35 0.00 0.00 0.00 0.00 0.00 00:07:12.136 00:07:13.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.071 Nvme0n1 : 4.00 23711.75 92.62 0.00 0.00 0.00 0.00 0.00 00:07:13.071 [2024-10-15T10:47:33.390Z] =================================================================================================================== 00:07:13.071 [2024-10-15T10:47:33.390Z] Total : 23711.75 92.62 0.00 0.00 0.00 0.00 0.00 00:07:13.071 00:07:14.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.005 Nvme0n1 : 5.00 23775.00 92.87 0.00 0.00 0.00 0.00 0.00 00:07:14.005 [2024-10-15T10:47:34.324Z] =================================================================================================================== 00:07:14.005 [2024-10-15T10:47:34.324Z] Total : 23775.00 92.87 0.00 0.00 0.00 0.00 0.00 00:07:14.005 00:07:14.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.941 Nvme0n1 : 6.00 23801.67 92.98 0.00 0.00 0.00 0.00 0.00 00:07:14.941 [2024-10-15T10:47:35.260Z] =================================================================================================================== 00:07:14.941 
[2024-10-15T10:47:35.260Z] Total : 23801.67 92.98 0.00 0.00 0.00 0.00 0.00 00:07:14.941 00:07:15.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.876 Nvme0n1 : 7.00 23840.14 93.13 0.00 0.00 0.00 0.00 0.00 00:07:15.876 [2024-10-15T10:47:36.195Z] =================================================================================================================== 00:07:15.876 [2024-10-15T10:47:36.195Z] Total : 23840.14 93.13 0.00 0.00 0.00 0.00 0.00 00:07:15.876 00:07:17.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.250 Nvme0n1 : 8.00 23877.38 93.27 0.00 0.00 0.00 0.00 0.00 00:07:17.250 [2024-10-15T10:47:37.569Z] =================================================================================================================== 00:07:17.250 [2024-10-15T10:47:37.569Z] Total : 23877.38 93.27 0.00 0.00 0.00 0.00 0.00 00:07:17.250 00:07:18.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.185 Nvme0n1 : 9.00 23894.44 93.34 0.00 0.00 0.00 0.00 0.00 00:07:18.185 [2024-10-15T10:47:38.504Z] =================================================================================================================== 00:07:18.185 [2024-10-15T10:47:38.504Z] Total : 23894.44 93.34 0.00 0.00 0.00 0.00 0.00 00:07:18.185 00:07:19.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.122 Nvme0n1 : 10.00 23911.90 93.41 0.00 0.00 0.00 0.00 0.00 00:07:19.122 [2024-10-15T10:47:39.441Z] =================================================================================================================== 00:07:19.122 [2024-10-15T10:47:39.441Z] Total : 23911.90 93.41 0.00 0.00 0.00 0.00 0.00 00:07:19.122 00:07:19.122 00:07:19.122 Latency(us) 00:07:19.122 [2024-10-15T10:47:39.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:19.122 Nvme0n1 : 10.00 23915.68 93.42 0.00 0.00 5349.46 2293.76 9986.44 00:07:19.122 [2024-10-15T10:47:39.441Z] =================================================================================================================== 00:07:19.122 [2024-10-15T10:47:39.441Z] Total : 23915.68 93.42 0.00 0.00 5349.46 2293.76 9986.44 00:07:19.122 { 00:07:19.122 "results": [ 00:07:19.122 { 00:07:19.122 "job": "Nvme0n1", 00:07:19.122 "core_mask": "0x2", 00:07:19.122 "workload": "randwrite", 00:07:19.122 "status": "finished", 00:07:19.122 "queue_depth": 128, 00:07:19.122 "io_size": 4096, 00:07:19.122 "runtime": 10.003772, 00:07:19.122 "iops": 23915.67900587898, 00:07:19.122 "mibps": 93.42062111671477, 00:07:19.122 "io_failed": 0, 00:07:19.122 "io_timeout": 0, 00:07:19.122 "avg_latency_us": 5349.455355415713, 00:07:19.122 "min_latency_us": 2293.76, 00:07:19.122 "max_latency_us": 9986.438095238096 00:07:19.122 } 00:07:19.122 ], 00:07:19.122 "core_count": 1 00:07:19.122 } 00:07:19.122 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1071130 00:07:19.122 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1071130 ']' 00:07:19.122 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1071130 00:07:19.122 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:19.122 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.122 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1071130 00:07:19.122 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:19.122 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:19.122 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1071130' 00:07:19.122 killing process with pid 1071130 00:07:19.122 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1071130 00:07:19.122 Received shutdown signal, test time was about 10.000000 seconds 00:07:19.122 00:07:19.122 Latency(us) 00:07:19.122 [2024-10-15T10:47:39.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.122 [2024-10-15T10:47:39.441Z] =================================================================================================================== 00:07:19.122 [2024-10-15T10:47:39.441Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:19.122 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1071130 00:07:19.122 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:19.380 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:19.639 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07aa57d2-3280-4576-9856-a1cd9da1b53d 00:07:19.639 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:19.900 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:19.900 12:47:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:19.900 12:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:19.900 [2024-10-15 12:47:40.170300] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07aa57d2-3280-4576-9856-a1cd9da1b53d 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07aa57d2-3280-4576-9856-a1cd9da1b53d 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.160 12:47:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07aa57d2-3280-4576-9856-a1cd9da1b53d 00:07:20.160 request: 00:07:20.160 { 00:07:20.160 "uuid": "07aa57d2-3280-4576-9856-a1cd9da1b53d", 00:07:20.160 "method": "bdev_lvol_get_lvstores", 00:07:20.160 "req_id": 1 00:07:20.160 } 00:07:20.160 Got JSON-RPC error response 00:07:20.160 response: 00:07:20.160 { 00:07:20.160 "code": -19, 00:07:20.160 "message": "No such device" 00:07:20.160 } 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.160 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:20.418 aio_bdev 00:07:20.418 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5a6ebb59-a44f-4ad5-b391-201fe8af7b80 00:07:20.418 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=5a6ebb59-a44f-4ad5-b391-201fe8af7b80 00:07:20.418 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:20.418 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:20.418 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:20.418 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:20.418 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:20.677 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5a6ebb59-a44f-4ad5-b391-201fe8af7b80 -t 2000 00:07:20.677 [ 00:07:20.677 { 00:07:20.677 "name": "5a6ebb59-a44f-4ad5-b391-201fe8af7b80", 00:07:20.677 "aliases": [ 00:07:20.677 "lvs/lvol" 00:07:20.677 ], 00:07:20.677 "product_name": "Logical Volume", 00:07:20.677 "block_size": 4096, 00:07:20.677 "num_blocks": 38912, 00:07:20.677 "uuid": "5a6ebb59-a44f-4ad5-b391-201fe8af7b80", 00:07:20.677 "assigned_rate_limits": { 00:07:20.677 "rw_ios_per_sec": 0, 00:07:20.677 "rw_mbytes_per_sec": 0, 00:07:20.677 "r_mbytes_per_sec": 0, 00:07:20.677 "w_mbytes_per_sec": 0 00:07:20.677 }, 00:07:20.677 "claimed": false, 00:07:20.677 "zoned": false, 00:07:20.677 "supported_io_types": { 00:07:20.677 "read": true, 00:07:20.677 "write": true, 00:07:20.677 "unmap": true, 00:07:20.677 "flush": false, 00:07:20.677 "reset": true, 00:07:20.677 
"nvme_admin": false, 00:07:20.677 "nvme_io": false, 00:07:20.677 "nvme_io_md": false, 00:07:20.677 "write_zeroes": true, 00:07:20.677 "zcopy": false, 00:07:20.677 "get_zone_info": false, 00:07:20.677 "zone_management": false, 00:07:20.677 "zone_append": false, 00:07:20.677 "compare": false, 00:07:20.677 "compare_and_write": false, 00:07:20.677 "abort": false, 00:07:20.677 "seek_hole": true, 00:07:20.677 "seek_data": true, 00:07:20.677 "copy": false, 00:07:20.677 "nvme_iov_md": false 00:07:20.677 }, 00:07:20.677 "driver_specific": { 00:07:20.677 "lvol": { 00:07:20.677 "lvol_store_uuid": "07aa57d2-3280-4576-9856-a1cd9da1b53d", 00:07:20.677 "base_bdev": "aio_bdev", 00:07:20.677 "thin_provision": false, 00:07:20.677 "num_allocated_clusters": 38, 00:07:20.677 "snapshot": false, 00:07:20.677 "clone": false, 00:07:20.677 "esnap_clone": false 00:07:20.677 } 00:07:20.677 } 00:07:20.677 } 00:07:20.677 ] 00:07:20.677 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:20.677 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07aa57d2-3280-4576-9856-a1cd9da1b53d 00:07:20.677 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:20.935 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:20.935 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07aa57d2-3280-4576-9856-a1cd9da1b53d 00:07:20.935 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:21.194 12:47:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:21.194 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5a6ebb59-a44f-4ad5-b391-201fe8af7b80 00:07:21.452 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 07aa57d2-3280-4576-9856-a1cd9da1b53d 00:07:21.452 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:21.710 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.710 00:07:21.710 real 0m15.537s 00:07:21.710 user 0m15.066s 00:07:21.710 sys 0m1.479s 00:07:21.710 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.710 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:21.710 ************************************ 00:07:21.710 END TEST lvs_grow_clean 00:07:21.710 ************************************ 00:07:21.710 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:21.710 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:21.710 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.710 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.969 ************************************ 
00:07:21.969 START TEST lvs_grow_dirty 00:07:21.969 ************************************ 00:07:21.969 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:21.969 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:21.969 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:21.969 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:21.969 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:21.969 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:21.969 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:21.969 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.969 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.969 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.969 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:21.969 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:22.228 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7fb93c2a-a200-429b-9f21-88d90973e707 00:07:22.228 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb93c2a-a200-429b-9f21-88d90973e707 00:07:22.228 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:22.486 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:22.486 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:22.487 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7fb93c2a-a200-429b-9f21-88d90973e707 lvol 150 00:07:22.745 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=14159cdc-c813-47a7-82c5-ddc8a72ecd98 00:07:22.745 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.745 12:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:22.745 [2024-10-15 12:47:43.001549] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:22.745 [2024-10-15 12:47:43.001606] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:22.745 true 00:07:22.745 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb93c2a-a200-429b-9f21-88d90973e707 00:07:22.745 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:23.003 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:23.003 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:23.262 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 14159cdc-c813-47a7-82c5-ddc8a72ecd98 00:07:23.522 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:23.522 [2024-10-15 12:47:43.775859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.522 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:23.782 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1073738 00:07:23.782 12:47:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.782 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:23.782 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1073738 /var/tmp/bdevperf.sock 00:07:23.782 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1073738 ']' 00:07:23.782 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.782 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.782 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:23.782 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.782 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:23.782 [2024-10-15 12:47:44.018833] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:07:23.782 [2024-10-15 12:47:44.018877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1073738 ] 00:07:23.782 [2024-10-15 12:47:44.085550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.040 [2024-10-15 12:47:44.125574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.040 12:47:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.040 12:47:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:24.040 12:47:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:24.608 Nvme0n1 00:07:24.608 12:47:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:24.608 [ 00:07:24.608 { 00:07:24.608 "name": "Nvme0n1", 00:07:24.608 "aliases": [ 00:07:24.608 "14159cdc-c813-47a7-82c5-ddc8a72ecd98" 00:07:24.608 ], 00:07:24.608 "product_name": "NVMe disk", 00:07:24.608 "block_size": 4096, 00:07:24.608 "num_blocks": 38912, 00:07:24.608 "uuid": "14159cdc-c813-47a7-82c5-ddc8a72ecd98", 00:07:24.608 "numa_id": 1, 00:07:24.608 "assigned_rate_limits": { 00:07:24.608 "rw_ios_per_sec": 0, 00:07:24.608 "rw_mbytes_per_sec": 0, 00:07:24.608 "r_mbytes_per_sec": 0, 00:07:24.608 "w_mbytes_per_sec": 0 00:07:24.608 }, 00:07:24.608 "claimed": false, 00:07:24.608 "zoned": false, 00:07:24.608 "supported_io_types": { 00:07:24.608 "read": true, 
00:07:24.608 "write": true, 00:07:24.608 "unmap": true, 00:07:24.608 "flush": true, 00:07:24.608 "reset": true, 00:07:24.608 "nvme_admin": true, 00:07:24.608 "nvme_io": true, 00:07:24.608 "nvme_io_md": false, 00:07:24.608 "write_zeroes": true, 00:07:24.608 "zcopy": false, 00:07:24.608 "get_zone_info": false, 00:07:24.608 "zone_management": false, 00:07:24.608 "zone_append": false, 00:07:24.608 "compare": true, 00:07:24.608 "compare_and_write": true, 00:07:24.608 "abort": true, 00:07:24.608 "seek_hole": false, 00:07:24.608 "seek_data": false, 00:07:24.608 "copy": true, 00:07:24.608 "nvme_iov_md": false 00:07:24.608 }, 00:07:24.608 "memory_domains": [ 00:07:24.608 { 00:07:24.608 "dma_device_id": "system", 00:07:24.608 "dma_device_type": 1 00:07:24.608 } 00:07:24.608 ], 00:07:24.608 "driver_specific": { 00:07:24.608 "nvme": [ 00:07:24.608 { 00:07:24.608 "trid": { 00:07:24.608 "trtype": "TCP", 00:07:24.608 "adrfam": "IPv4", 00:07:24.608 "traddr": "10.0.0.2", 00:07:24.608 "trsvcid": "4420", 00:07:24.608 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:24.608 }, 00:07:24.608 "ctrlr_data": { 00:07:24.608 "cntlid": 1, 00:07:24.608 "vendor_id": "0x8086", 00:07:24.608 "model_number": "SPDK bdev Controller", 00:07:24.608 "serial_number": "SPDK0", 00:07:24.608 "firmware_revision": "25.01", 00:07:24.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:24.608 "oacs": { 00:07:24.608 "security": 0, 00:07:24.608 "format": 0, 00:07:24.608 "firmware": 0, 00:07:24.608 "ns_manage": 0 00:07:24.608 }, 00:07:24.608 "multi_ctrlr": true, 00:07:24.608 "ana_reporting": false 00:07:24.608 }, 00:07:24.608 "vs": { 00:07:24.608 "nvme_version": "1.3" 00:07:24.608 }, 00:07:24.608 "ns_data": { 00:07:24.608 "id": 1, 00:07:24.608 "can_share": true 00:07:24.608 } 00:07:24.608 } 00:07:24.608 ], 00:07:24.608 "mp_policy": "active_passive" 00:07:24.608 } 00:07:24.608 } 00:07:24.608 ] 00:07:24.608 12:47:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1073967 00:07:24.608 12:47:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:24.608 12:47:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:24.867 Running I/O for 10 seconds... 00:07:25.803 Latency(us) 00:07:25.803 [2024-10-15T10:47:46.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.803 Nvme0n1 : 1.00 23423.00 91.50 0.00 0.00 0.00 0.00 0.00 00:07:25.804 [2024-10-15T10:47:46.123Z] =================================================================================================================== 00:07:25.804 [2024-10-15T10:47:46.123Z] Total : 23423.00 91.50 0.00 0.00 0.00 0.00 0.00 00:07:25.804 00:07:26.740 12:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7fb93c2a-a200-429b-9f21-88d90973e707 00:07:26.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.741 Nvme0n1 : 2.00 23592.50 92.16 0.00 0.00 0.00 0.00 0.00 00:07:26.741 [2024-10-15T10:47:47.060Z] =================================================================================================================== 00:07:26.741 [2024-10-15T10:47:47.060Z] Total : 23592.50 92.16 0.00 0.00 0.00 0.00 0.00 00:07:26.741 00:07:26.741 true 00:07:26.741 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:26.741 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
7fb93c2a-a200-429b-9f21-88d90973e707 00:07:26.999 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:26.999 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:26.999 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1073967 00:07:27.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.936 Nvme0n1 : 3.00 23649.67 92.38 0.00 0.00 0.00 0.00 0.00 00:07:27.936 [2024-10-15T10:47:48.255Z] =================================================================================================================== 00:07:27.936 [2024-10-15T10:47:48.255Z] Total : 23649.67 92.38 0.00 0.00 0.00 0.00 0.00 00:07:27.936 00:07:28.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.874 Nvme0n1 : 4.00 23703.50 92.59 0.00 0.00 0.00 0.00 0.00 00:07:28.874 [2024-10-15T10:47:49.193Z] =================================================================================================================== 00:07:28.874 [2024-10-15T10:47:49.193Z] Total : 23703.50 92.59 0.00 0.00 0.00 0.00 0.00 00:07:28.874 00:07:29.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.811 Nvme0n1 : 5.00 23737.20 92.72 0.00 0.00 0.00 0.00 0.00 00:07:29.811 [2024-10-15T10:47:50.130Z] =================================================================================================================== 00:07:29.811 [2024-10-15T10:47:50.130Z] Total : 23737.20 92.72 0.00 0.00 0.00 0.00 0.00 00:07:29.811 00:07:30.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.746 Nvme0n1 : 6.00 23710.00 92.62 0.00 0.00 0.00 0.00 0.00 00:07:30.746 [2024-10-15T10:47:51.065Z] =================================================================================================================== 
00:07:30.746 [2024-10-15T10:47:51.065Z] Total : 23710.00 92.62 0.00 0.00 0.00 0.00 0.00 00:07:30.746 00:07:31.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.683 Nvme0n1 : 7.00 23754.71 92.79 0.00 0.00 0.00 0.00 0.00 00:07:31.683 [2024-10-15T10:47:52.002Z] =================================================================================================================== 00:07:31.683 [2024-10-15T10:47:52.002Z] Total : 23754.71 92.79 0.00 0.00 0.00 0.00 0.00 00:07:31.683 00:07:33.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.061 Nvme0n1 : 8.00 23772.50 92.86 0.00 0.00 0.00 0.00 0.00 00:07:33.061 [2024-10-15T10:47:53.380Z] =================================================================================================================== 00:07:33.061 [2024-10-15T10:47:53.380Z] Total : 23772.50 92.86 0.00 0.00 0.00 0.00 0.00 00:07:33.061 00:07:33.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.999 Nvme0n1 : 9.00 23799.78 92.97 0.00 0.00 0.00 0.00 0.00 00:07:33.999 [2024-10-15T10:47:54.318Z] =================================================================================================================== 00:07:33.999 [2024-10-15T10:47:54.318Z] Total : 23799.78 92.97 0.00 0.00 0.00 0.00 0.00 00:07:33.999 00:07:34.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.936 Nvme0n1 : 10.00 23826.60 93.07 0.00 0.00 0.00 0.00 0.00 00:07:34.936 [2024-10-15T10:47:55.255Z] =================================================================================================================== 00:07:34.936 [2024-10-15T10:47:55.255Z] Total : 23826.60 93.07 0.00 0.00 0.00 0.00 0.00 00:07:34.936 00:07:34.936 00:07:34.936 Latency(us) 00:07:34.936 [2024-10-15T10:47:55.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:07:34.936 Nvme0n1 : 10.00 23828.94 93.08 0.00 0.00 5368.76 3167.57 11484.40 00:07:34.936 [2024-10-15T10:47:55.255Z] =================================================================================================================== 00:07:34.936 [2024-10-15T10:47:55.255Z] Total : 23828.94 93.08 0.00 0.00 5368.76 3167.57 11484.40 00:07:34.936 { 00:07:34.936 "results": [ 00:07:34.936 { 00:07:34.936 "job": "Nvme0n1", 00:07:34.936 "core_mask": "0x2", 00:07:34.936 "workload": "randwrite", 00:07:34.936 "status": "finished", 00:07:34.936 "queue_depth": 128, 00:07:34.936 "io_size": 4096, 00:07:34.936 "runtime": 10.00439, 00:07:34.936 "iops": 23828.939095736972, 00:07:34.936 "mibps": 93.08179334272255, 00:07:34.936 "io_failed": 0, 00:07:34.936 "io_timeout": 0, 00:07:34.936 "avg_latency_us": 5368.759243525225, 00:07:34.936 "min_latency_us": 3167.5733333333333, 00:07:34.936 "max_latency_us": 11484.40380952381 00:07:34.936 } 00:07:34.936 ], 00:07:34.936 "core_count": 1 00:07:34.936 } 00:07:34.936 12:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1073738 00:07:34.936 12:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1073738 ']' 00:07:34.936 12:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1073738 00:07:34.936 12:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:34.936 12:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.936 12:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1073738 00:07:34.936 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:34.936 12:47:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:34.936 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1073738' 00:07:34.936 killing process with pid 1073738 00:07:34.936 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1073738 00:07:34.936 Received shutdown signal, test time was about 10.000000 seconds 00:07:34.936 00:07:34.936 Latency(us) 00:07:34.936 [2024-10-15T10:47:55.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.936 [2024-10-15T10:47:55.255Z] =================================================================================================================== 00:07:34.936 [2024-10-15T10:47:55.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:34.936 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1073738 00:07:34.936 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.196 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:35.454 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb93c2a-a200-429b-9f21-88d90973e707 00:07:35.454 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:35.713 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:35.713 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:35.713 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1070628 00:07:35.713 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1070628 00:07:35.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1070628 Killed "${NVMF_APP[@]}" "$@" 00:07:35.713 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:35.713 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:35.713 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:35.713 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:35.713 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.714 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1075815 00:07:35.714 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1075815 00:07:35.714 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:35.714 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1075815 ']' 00:07:35.714 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.714 12:47:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.714 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.714 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.714 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.714 [2024-10-15 12:47:55.913330] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:07:35.714 [2024-10-15 12:47:55.913376] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.714 [2024-10-15 12:47:55.986420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.714 [2024-10-15 12:47:56.026476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.714 [2024-10-15 12:47:56.026509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.714 [2024-10-15 12:47:56.026516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.714 [2024-10-15 12:47:56.026522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.714 [2024-10-15 12:47:56.026527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:35.714 [2024-10-15 12:47:56.027095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.973 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.973 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:35.973 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:35.973 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:35.973 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.973 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.973 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:36.231 [2024-10-15 12:47:56.311811] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:36.231 [2024-10-15 12:47:56.311886] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:36.231 [2024-10-15 12:47:56.311910] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:36.231 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:36.231 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 14159cdc-c813-47a7-82c5-ddc8a72ecd98 00:07:36.232 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=14159cdc-c813-47a7-82c5-ddc8a72ecd98 
00:07:36.232 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:36.232 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:36.232 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:36.232 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:36.232 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:36.232 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 14159cdc-c813-47a7-82c5-ddc8a72ecd98 -t 2000 00:07:36.489 [ 00:07:36.489 { 00:07:36.489 "name": "14159cdc-c813-47a7-82c5-ddc8a72ecd98", 00:07:36.489 "aliases": [ 00:07:36.489 "lvs/lvol" 00:07:36.489 ], 00:07:36.489 "product_name": "Logical Volume", 00:07:36.489 "block_size": 4096, 00:07:36.489 "num_blocks": 38912, 00:07:36.489 "uuid": "14159cdc-c813-47a7-82c5-ddc8a72ecd98", 00:07:36.489 "assigned_rate_limits": { 00:07:36.489 "rw_ios_per_sec": 0, 00:07:36.489 "rw_mbytes_per_sec": 0, 00:07:36.489 "r_mbytes_per_sec": 0, 00:07:36.489 "w_mbytes_per_sec": 0 00:07:36.489 }, 00:07:36.489 "claimed": false, 00:07:36.489 "zoned": false, 00:07:36.489 "supported_io_types": { 00:07:36.489 "read": true, 00:07:36.489 "write": true, 00:07:36.489 "unmap": true, 00:07:36.489 "flush": false, 00:07:36.489 "reset": true, 00:07:36.489 "nvme_admin": false, 00:07:36.489 "nvme_io": false, 00:07:36.489 "nvme_io_md": false, 00:07:36.490 "write_zeroes": true, 00:07:36.490 "zcopy": false, 00:07:36.490 "get_zone_info": false, 00:07:36.490 "zone_management": false, 00:07:36.490 "zone_append": 
false, 00:07:36.490 "compare": false, 00:07:36.490 "compare_and_write": false, 00:07:36.490 "abort": false, 00:07:36.490 "seek_hole": true, 00:07:36.490 "seek_data": true, 00:07:36.490 "copy": false, 00:07:36.490 "nvme_iov_md": false 00:07:36.490 }, 00:07:36.490 "driver_specific": { 00:07:36.490 "lvol": { 00:07:36.490 "lvol_store_uuid": "7fb93c2a-a200-429b-9f21-88d90973e707", 00:07:36.490 "base_bdev": "aio_bdev", 00:07:36.490 "thin_provision": false, 00:07:36.490 "num_allocated_clusters": 38, 00:07:36.490 "snapshot": false, 00:07:36.490 "clone": false, 00:07:36.490 "esnap_clone": false 00:07:36.490 } 00:07:36.490 } 00:07:36.490 } 00:07:36.490 ] 00:07:36.490 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:36.490 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb93c2a-a200-429b-9f21-88d90973e707 00:07:36.490 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:36.748 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:36.748 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb93c2a-a200-429b-9f21-88d90973e707 00:07:36.748 12:47:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:37.007 [2024-10-15 12:47:57.248683] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb93c2a-a200-429b-9f21-88d90973e707 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb93c2a-a200-429b-9f21-88d90973e707 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.007 12:47:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:37.007 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb93c2a-a200-429b-9f21-88d90973e707 00:07:37.265 request: 00:07:37.265 { 00:07:37.265 "uuid": "7fb93c2a-a200-429b-9f21-88d90973e707", 00:07:37.265 "method": "bdev_lvol_get_lvstores", 00:07:37.265 "req_id": 1 00:07:37.265 } 00:07:37.265 Got JSON-RPC error response 00:07:37.265 response: 00:07:37.265 { 00:07:37.265 "code": -19, 00:07:37.265 "message": "No such device" 00:07:37.265 } 00:07:37.265 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:37.265 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.265 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.265 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.266 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:37.524 aio_bdev 00:07:37.524 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 14159cdc-c813-47a7-82c5-ddc8a72ecd98 00:07:37.524 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=14159cdc-c813-47a7-82c5-ddc8a72ecd98 00:07:37.524 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:37.524 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:37.524 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:37.524 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:37.524 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:37.782 12:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 14159cdc-c813-47a7-82c5-ddc8a72ecd98 -t 2000 00:07:37.782 [ 00:07:37.782 { 00:07:37.782 "name": "14159cdc-c813-47a7-82c5-ddc8a72ecd98", 00:07:37.782 "aliases": [ 00:07:37.782 "lvs/lvol" 00:07:37.782 ], 00:07:37.782 "product_name": "Logical Volume", 00:07:37.782 "block_size": 4096, 00:07:37.782 "num_blocks": 38912, 00:07:37.782 "uuid": "14159cdc-c813-47a7-82c5-ddc8a72ecd98", 00:07:37.782 "assigned_rate_limits": { 00:07:37.782 "rw_ios_per_sec": 0, 00:07:37.782 "rw_mbytes_per_sec": 0, 00:07:37.783 "r_mbytes_per_sec": 0, 00:07:37.783 "w_mbytes_per_sec": 0 00:07:37.783 }, 00:07:37.783 "claimed": false, 00:07:37.783 "zoned": false, 00:07:37.783 "supported_io_types": { 00:07:37.783 "read": true, 00:07:37.783 "write": true, 00:07:37.783 "unmap": true, 00:07:37.783 "flush": false, 00:07:37.783 "reset": true, 00:07:37.783 "nvme_admin": false, 00:07:37.783 "nvme_io": false, 00:07:37.783 "nvme_io_md": false, 00:07:37.783 "write_zeroes": true, 00:07:37.783 "zcopy": false, 00:07:37.783 "get_zone_info": false, 00:07:37.783 "zone_management": false, 00:07:37.783 "zone_append": false, 00:07:37.783 "compare": false, 00:07:37.783 "compare_and_write": false, 
00:07:37.783 "abort": false, 00:07:37.783 "seek_hole": true, 00:07:37.783 "seek_data": true, 00:07:37.783 "copy": false, 00:07:37.783 "nvme_iov_md": false 00:07:37.783 }, 00:07:37.783 "driver_specific": { 00:07:37.783 "lvol": { 00:07:37.783 "lvol_store_uuid": "7fb93c2a-a200-429b-9f21-88d90973e707", 00:07:37.783 "base_bdev": "aio_bdev", 00:07:37.783 "thin_provision": false, 00:07:37.783 "num_allocated_clusters": 38, 00:07:37.784 "snapshot": false, 00:07:37.784 "clone": false, 00:07:37.784 "esnap_clone": false 00:07:37.784 } 00:07:37.784 } 00:07:37.784 } 00:07:37.784 ] 00:07:37.784 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:37.784 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:37.784 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb93c2a-a200-429b-9f21-88d90973e707 00:07:38.042 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:38.042 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb93c2a-a200-429b-9f21-88d90973e707 00:07:38.042 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:38.301 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:38.301 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 14159cdc-c813-47a7-82c5-ddc8a72ecd98 00:07:38.560 12:47:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7fb93c2a-a200-429b-9f21-88d90973e707 00:07:38.560 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:38.818 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.819 00:07:38.819 real 0m16.989s 00:07:38.819 user 0m43.843s 00:07:38.819 sys 0m3.771s 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:38.819 ************************************ 00:07:38.819 END TEST lvs_grow_dirty 00:07:38.819 ************************************ 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:38.819 nvmf_trace.0 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:38.819 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:38.819 rmmod nvme_tcp 00:07:39.078 rmmod nvme_fabrics 00:07:39.078 rmmod nvme_keyring 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1075815 ']' 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1075815 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1075815 ']' 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1075815 
00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1075815 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1075815' 00:07:39.078 killing process with pid 1075815 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1075815 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1075815 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.078 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.614 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:41.614 00:07:41.614 real 0m41.824s 00:07:41.614 user 1m4.481s 00:07:41.614 sys 0m10.236s 00:07:41.614 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.614 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.614 ************************************ 00:07:41.614 END TEST nvmf_lvs_grow 00:07:41.614 ************************************ 00:07:41.614 12:48:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:41.614 12:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.614 12:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.614 12:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.614 ************************************ 00:07:41.614 START TEST nvmf_bdev_io_wait 00:07:41.614 ************************************ 00:07:41.614 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:41.614 * Looking for test storage... 
00:07:41.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.614 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:41.614 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:41.614 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:41.615 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.615 --rc genhtml_branch_coverage=1 00:07:41.615 --rc genhtml_function_coverage=1 00:07:41.615 --rc genhtml_legend=1 00:07:41.615 --rc geninfo_all_blocks=1 00:07:41.615 --rc geninfo_unexecuted_blocks=1 00:07:41.615 00:07:41.615 ' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:41.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.615 --rc genhtml_branch_coverage=1 00:07:41.615 --rc genhtml_function_coverage=1 00:07:41.615 --rc genhtml_legend=1 00:07:41.615 --rc geninfo_all_blocks=1 00:07:41.615 --rc geninfo_unexecuted_blocks=1 00:07:41.615 00:07:41.615 ' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:41.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.615 --rc genhtml_branch_coverage=1 00:07:41.615 --rc genhtml_function_coverage=1 00:07:41.615 --rc genhtml_legend=1 00:07:41.615 --rc geninfo_all_blocks=1 00:07:41.615 --rc geninfo_unexecuted_blocks=1 00:07:41.615 00:07:41.615 ' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:41.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.615 --rc genhtml_branch_coverage=1 00:07:41.615 --rc genhtml_function_coverage=1 00:07:41.615 --rc genhtml_legend=1 00:07:41.615 --rc geninfo_all_blocks=1 00:07:41.615 --rc geninfo_unexecuted_blocks=1 00:07:41.615 00:07:41.615 ' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.615 12:48:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.615 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:41.616 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:41.616 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:41.616 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:41.616 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:48.188 12:48:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:48.188 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:48.188 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.188 12:48:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:48.188 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:48.189 Found net devices under 0000:86:00.0: cvl_0_0 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.189 
12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:48.189 Found net devices under 0000:86:00.1: cvl_0_1 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.189 12:48:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:07:48.189 00:07:48.189 --- 10.0.0.2 ping statistics --- 00:07:48.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.189 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:48.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:07:48.189 00:07:48.189 --- 10.0.0.1 ping statistics --- 00:07:48.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.189 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1080002 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # waitforlisten 1080002 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1080002 ']' 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.189 [2024-10-15 12:48:07.826900] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:07:48.189 [2024-10-15 12:48:07.826945] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.189 [2024-10-15 12:48:07.899007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.189 [2024-10-15 12:48:07.944709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.189 [2024-10-15 12:48:07.944743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:48.189 [2024-10-15 12:48:07.944751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.189 [2024-10-15 12:48:07.944757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.189 [2024-10-15 12:48:07.944762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.189 [2024-10-15 12:48:07.946347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.189 [2024-10-15 12:48:07.946376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.189 [2024-10-15 12:48:07.946503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.189 [2024-10-15 12:48:07.946504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.189 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.190 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:48.190 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:48.190 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:48.190 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.190 12:48:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.190 [2024-10-15 12:48:08.087289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.190 Malloc0 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.190 
12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.190 [2024-10-15 12:48:08.142659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1080175 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1080178 
00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:48.190 { 00:07:48.190 "params": { 00:07:48.190 "name": "Nvme$subsystem", 00:07:48.190 "trtype": "$TEST_TRANSPORT", 00:07:48.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.190 "adrfam": "ipv4", 00:07:48.190 "trsvcid": "$NVMF_PORT", 00:07:48.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.190 "hdgst": ${hdgst:-false}, 00:07:48.190 "ddgst": ${ddgst:-false} 00:07:48.190 }, 00:07:48.190 "method": "bdev_nvme_attach_controller" 00:07:48.190 } 00:07:48.190 EOF 00:07:48.190 )") 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1080183 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:48.190 { 00:07:48.190 "params": { 00:07:48.190 "name": "Nvme$subsystem", 00:07:48.190 "trtype": "$TEST_TRANSPORT", 00:07:48.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.190 "adrfam": "ipv4", 00:07:48.190 "trsvcid": "$NVMF_PORT", 00:07:48.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.190 "hdgst": ${hdgst:-false}, 00:07:48.190 "ddgst": ${ddgst:-false} 00:07:48.190 }, 00:07:48.190 "method": "bdev_nvme_attach_controller" 00:07:48.190 } 00:07:48.190 EOF 00:07:48.190 )") 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1080187 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:48.190 { 00:07:48.190 "params": { 00:07:48.190 "name": "Nvme$subsystem", 00:07:48.190 "trtype": "$TEST_TRANSPORT", 00:07:48.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.190 "adrfam": "ipv4", 00:07:48.190 "trsvcid": "$NVMF_PORT", 00:07:48.190 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.190 "hdgst": ${hdgst:-false}, 00:07:48.190 "ddgst": ${ddgst:-false} 00:07:48.190 }, 00:07:48.190 "method": "bdev_nvme_attach_controller" 00:07:48.190 } 00:07:48.190 EOF 00:07:48.190 )") 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:48.190 { 00:07:48.190 "params": { 00:07:48.190 "name": "Nvme$subsystem", 00:07:48.190 "trtype": "$TEST_TRANSPORT", 00:07:48.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.190 "adrfam": "ipv4", 00:07:48.190 "trsvcid": "$NVMF_PORT", 00:07:48.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.190 "hdgst": ${hdgst:-false}, 00:07:48.190 "ddgst": ${ddgst:-false} 00:07:48.190 }, 00:07:48.190 "method": "bdev_nvme_attach_controller" 00:07:48.190 } 00:07:48.190 EOF 00:07:48.190 )") 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1080175 00:07:48.190 12:48:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:48.190 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:48.191 "params": { 00:07:48.191 "name": "Nvme1", 00:07:48.191 "trtype": "tcp", 00:07:48.191 "traddr": "10.0.0.2", 00:07:48.191 "adrfam": "ipv4", 00:07:48.191 "trsvcid": "4420", 00:07:48.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.191 "hdgst": false, 00:07:48.191 "ddgst": false 00:07:48.191 }, 00:07:48.191 "method": "bdev_nvme_attach_controller" 00:07:48.191 }' 00:07:48.191 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:07:48.191 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:48.191 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:48.191 "params": { 00:07:48.191 "name": "Nvme1", 00:07:48.191 "trtype": "tcp", 00:07:48.191 "traddr": "10.0.0.2", 00:07:48.191 "adrfam": "ipv4", 00:07:48.191 "trsvcid": "4420", 00:07:48.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.191 "hdgst": false, 00:07:48.191 "ddgst": false 00:07:48.191 }, 00:07:48.191 "method": "bdev_nvme_attach_controller" 00:07:48.191 }' 00:07:48.191 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:48.191 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:48.191 "params": { 00:07:48.191 "name": "Nvme1", 00:07:48.191 "trtype": "tcp", 00:07:48.191 "traddr": "10.0.0.2", 00:07:48.191 "adrfam": "ipv4", 00:07:48.191 "trsvcid": "4420", 00:07:48.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.191 "hdgst": false, 00:07:48.191 "ddgst": false 00:07:48.191 }, 00:07:48.191 "method": "bdev_nvme_attach_controller" 00:07:48.191 }' 00:07:48.191 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:48.191 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:48.191 "params": { 00:07:48.191 "name": "Nvme1", 00:07:48.191 "trtype": "tcp", 00:07:48.191 "traddr": "10.0.0.2", 00:07:48.191 "adrfam": "ipv4", 00:07:48.191 "trsvcid": "4420", 00:07:48.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.191 "hdgst": false, 00:07:48.191 "ddgst": false 00:07:48.191 }, 00:07:48.191 "method": "bdev_nvme_attach_controller" 00:07:48.191 }' 00:07:48.191 [2024-10-15 12:48:08.193539] Starting SPDK v25.01-pre git sha1 
96764f31c / DPDK 24.03.0 initialization... 00:07:48.191 [2024-10-15 12:48:08.193596] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:48.191 [2024-10-15 12:48:08.197751] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:07:48.191 [2024-10-15 12:48:08.197799] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:48.191 [2024-10-15 12:48:08.199667] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:07:48.191 [2024-10-15 12:48:08.199687] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:07:48.191 [2024-10-15 12:48:08.199713] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:48.191 [2024-10-15 12:48:08.199727] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:48.191 [2024-10-15 12:48:08.374339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.191 [2024-10-15 12:48:08.416708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:48.191 [2024-10-15 12:48:08.471867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.450 [2024-10-15 12:48:08.514527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:48.450 [2024-10-15 
12:48:08.568792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.450 [2024-10-15 12:48:08.616575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:48.450 [2024-10-15 12:48:08.628841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.450 [2024-10-15 12:48:08.671098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:48.450 Running I/O for 1 seconds... 00:07:48.450 Running I/O for 1 seconds... 00:07:48.708 Running I/O for 1 seconds... 00:07:48.708 Running I/O for 1 seconds... 00:07:49.645 13615.00 IOPS, 53.18 MiB/s 00:07:49.645 Latency(us) 00:07:49.645 [2024-10-15T10:48:09.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.645 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:49.645 Nvme1n1 : 1.01 13678.55 53.43 0.00 0.00 9330.53 4181.82 15354.15 00:07:49.645 [2024-10-15T10:48:09.964Z] =================================================================================================================== 00:07:49.645 [2024-10-15T10:48:09.964Z] Total : 13678.55 53.43 0.00 0.00 9330.53 4181.82 15354.15 00:07:49.645 7073.00 IOPS, 27.63 MiB/s 00:07:49.645 Latency(us) 00:07:49.645 [2024-10-15T10:48:09.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.645 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:49.645 Nvme1n1 : 1.02 7090.38 27.70 0.00 0.00 17885.19 8113.98 30833.13 00:07:49.645 [2024-10-15T10:48:09.964Z] =================================================================================================================== 00:07:49.645 [2024-10-15T10:48:09.964Z] Total : 7090.38 27.70 0.00 0.00 17885.19 8113.98 30833.13 00:07:49.645 12:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1080178 00:07:49.645 254160.00 IOPS, 992.81 MiB/s 00:07:49.645 Latency(us) 00:07:49.645 [2024-10-15T10:48:09.964Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.645 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:49.645 Nvme1n1 : 1.00 253770.85 991.29 0.00 0.00 501.53 227.23 1505.77 00:07:49.645 [2024-10-15T10:48:09.964Z] =================================================================================================================== 00:07:49.645 [2024-10-15T10:48:09.964Z] Total : 253770.85 991.29 0.00 0.00 501.53 227.23 1505.77 00:07:49.903 7724.00 IOPS, 30.17 MiB/s 00:07:49.903 Latency(us) 00:07:49.903 [2024-10-15T10:48:10.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.904 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:49.904 Nvme1n1 : 1.01 7810.61 30.51 0.00 0.00 16338.59 4618.73 43940.33 00:07:49.904 [2024-10-15T10:48:10.223Z] =================================================================================================================== 00:07:49.904 [2024-10-15T10:48:10.223Z] Total : 7810.61 30.51 0.00 0.00 16338.59 4618.73 43940.33 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1080183 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1080187 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:49.904 rmmod nvme_tcp 00:07:49.904 rmmod nvme_fabrics 00:07:49.904 rmmod nvme_keyring 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1080002 ']' 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1080002 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1080002 ']' 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1080002 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1080002 00:07:49.904 12:48:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1080002' 00:07:49.904 killing process with pid 1080002 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1080002 00:07:49.904 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1080002 00:07:50.163 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:50.163 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:50.163 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:50.163 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:50.163 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:07:50.163 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:07:50.163 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:50.163 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.163 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:50.163 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.163 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.163 12:48:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:52.701 00:07:52.701 real 0m10.906s 00:07:52.701 user 0m16.577s 00:07:52.701 sys 0m6.111s 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.701 ************************************ 00:07:52.701 END TEST nvmf_bdev_io_wait 00:07:52.701 ************************************ 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.701 ************************************ 00:07:52.701 START TEST nvmf_queue_depth 00:07:52.701 ************************************ 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:52.701 * Looking for test storage... 
00:07:52.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:52.701 
12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:52.701 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:52.701 --rc genhtml_branch_coverage=1 00:07:52.701 --rc genhtml_function_coverage=1 00:07:52.701 --rc genhtml_legend=1 00:07:52.701 --rc geninfo_all_blocks=1 00:07:52.701 --rc geninfo_unexecuted_blocks=1 00:07:52.701 00:07:52.701 ' 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:52.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.701 --rc genhtml_branch_coverage=1 00:07:52.701 --rc genhtml_function_coverage=1 00:07:52.701 --rc genhtml_legend=1 00:07:52.701 --rc geninfo_all_blocks=1 00:07:52.701 --rc geninfo_unexecuted_blocks=1 00:07:52.701 00:07:52.701 ' 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:52.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.701 --rc genhtml_branch_coverage=1 00:07:52.701 --rc genhtml_function_coverage=1 00:07:52.701 --rc genhtml_legend=1 00:07:52.701 --rc geninfo_all_blocks=1 00:07:52.701 --rc geninfo_unexecuted_blocks=1 00:07:52.701 00:07:52.701 ' 00:07:52.701 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:52.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.701 --rc genhtml_branch_coverage=1 00:07:52.701 --rc genhtml_function_coverage=1 00:07:52.701 --rc genhtml_legend=1 00:07:52.702 --rc geninfo_all_blocks=1 00:07:52.702 --rc geninfo_unexecuted_blocks=1 00:07:52.702 00:07:52.702 ' 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.702 12:48:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.702 12:48:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:52.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.702 12:48:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:52.702 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:59.272 12:48:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.272 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:59.273 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:59.273 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:59.273 Found net devices under 0000:86:00.0: cvl_0_0 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:59.273 Found net devices under 0000:86:00.1: cvl_0_1 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.273 
12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:59.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:07:59.273 00:07:59.273 --- 10.0.0.2 ping statistics --- 00:07:59.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.273 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:07:59.273 00:07:59.273 --- 10.0.0.1 ping statistics --- 00:07:59.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.273 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1084428 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1084428 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1084428 ']' 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.273 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.274 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.274 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.274 [2024-10-15 12:48:18.830017] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:07:59.274 [2024-10-15 12:48:18.830066] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.274 [2024-10-15 12:48:18.904167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.274 [2024-10-15 12:48:18.945686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.274 [2024-10-15 12:48:18.945722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:59.274 [2024-10-15 12:48:18.945729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.274 [2024-10-15 12:48:18.945735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.274 [2024-10-15 12:48:18.945740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.274 [2024-10-15 12:48:18.946296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.274 [2024-10-15 12:48:19.084444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.274 Malloc0 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.274 [2024-10-15 12:48:19.134583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.274 12:48:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1084447 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1084447 /var/tmp/bdevperf.sock 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1084447 ']' 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:59.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.274 [2024-10-15 12:48:19.186794] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:07:59.274 [2024-10-15 12:48:19.186837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084447 ] 00:07:59.274 [2024-10-15 12:48:19.255579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.274 [2024-10-15 12:48:19.295947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.274 NVMe0n1 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.274 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:59.533 Running I/O for 10 seconds... 
00:08:01.549 11781.00 IOPS, 46.02 MiB/s [2024-10-15T10:48:22.807Z] 12074.00 IOPS, 47.16 MiB/s [2024-10-15T10:48:23.743Z] 12040.33 IOPS, 47.03 MiB/s [2024-10-15T10:48:24.681Z] 12228.25 IOPS, 47.77 MiB/s [2024-10-15T10:48:25.618Z] 12281.80 IOPS, 47.98 MiB/s [2024-10-15T10:48:26.992Z] 12345.67 IOPS, 48.23 MiB/s [2024-10-15T10:48:27.928Z] 12396.14 IOPS, 48.42 MiB/s [2024-10-15T10:48:28.867Z] 12406.00 IOPS, 48.46 MiB/s [2024-10-15T10:48:29.805Z] 12447.44 IOPS, 48.62 MiB/s [2024-10-15T10:48:29.805Z] 12458.40 IOPS, 48.67 MiB/s 00:08:09.486 Latency(us) 00:08:09.486 [2024-10-15T10:48:29.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.486 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:09.486 Verification LBA range: start 0x0 length 0x4000 00:08:09.486 NVMe0n1 : 10.06 12478.71 48.74 0.00 0.00 81803.28 19348.72 52428.80 00:08:09.486 [2024-10-15T10:48:29.805Z] =================================================================================================================== 00:08:09.486 [2024-10-15T10:48:29.805Z] Total : 12478.71 48.74 0.00 0.00 81803.28 19348.72 52428.80 00:08:09.486 { 00:08:09.486 "results": [ 00:08:09.486 { 00:08:09.486 "job": "NVMe0n1", 00:08:09.486 "core_mask": "0x1", 00:08:09.486 "workload": "verify", 00:08:09.486 "status": "finished", 00:08:09.486 "verify_range": { 00:08:09.486 "start": 0, 00:08:09.486 "length": 16384 00:08:09.486 }, 00:08:09.486 "queue_depth": 1024, 00:08:09.486 "io_size": 4096, 00:08:09.486 "runtime": 10.064025, 00:08:09.486 "iops": 12478.705090657068, 00:08:09.486 "mibps": 48.74494176037917, 00:08:09.486 "io_failed": 0, 00:08:09.486 "io_timeout": 0, 00:08:09.486 "avg_latency_us": 81803.27546343124, 00:08:09.486 "min_latency_us": 19348.72380952381, 00:08:09.486 "max_latency_us": 52428.8 00:08:09.486 } 00:08:09.486 ], 00:08:09.486 "core_count": 1 00:08:09.486 } 00:08:09.486 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 1084447 00:08:09.486 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1084447 ']' 00:08:09.486 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1084447 00:08:09.486 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:09.486 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.486 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1084447 00:08:09.486 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.486 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.486 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1084447' 00:08:09.486 killing process with pid 1084447 00:08:09.486 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1084447 00:08:09.486 Received shutdown signal, test time was about 10.000000 seconds 00:08:09.486 00:08:09.486 Latency(us) 00:08:09.486 [2024-10-15T10:48:29.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.486 [2024-10-15T10:48:29.805Z] =================================================================================================================== 00:08:09.486 [2024-10-15T10:48:29.805Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:09.486 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1084447 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.746 rmmod nvme_tcp 00:08:09.746 rmmod nvme_fabrics 00:08:09.746 rmmod nvme_keyring 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1084428 ']' 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1084428 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1084428 ']' 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1084428 00:08:09.746 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:09.746 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.746 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1084428 00:08:09.746 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:08:09.746 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:09.746 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1084428' 00:08:09.746 killing process with pid 1084428 00:08:09.746 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1084428 00:08:09.746 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1084428 00:08:10.005 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:10.005 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:10.005 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:10.005 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:10.005 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:10.005 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:10.005 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:10.005 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:10.005 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:10.005 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.005 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.005 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.542 12:48:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.542 00:08:12.542 real 0m19.784s 00:08:12.542 user 0m23.031s 00:08:12.542 sys 0m6.138s 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.542 ************************************ 00:08:12.542 END TEST nvmf_queue_depth 00:08:12.542 ************************************ 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.542 ************************************ 00:08:12.542 START TEST nvmf_target_multipath 00:08:12.542 ************************************ 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:12.542 * Looking for test storage... 
00:08:12.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:12.542 12:48:32 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:12.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.542 --rc genhtml_branch_coverage=1 00:08:12.542 --rc genhtml_function_coverage=1 00:08:12.542 --rc genhtml_legend=1 00:08:12.542 --rc geninfo_all_blocks=1 00:08:12.542 --rc geninfo_unexecuted_blocks=1 00:08:12.542 00:08:12.542 ' 00:08:12.542 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:12.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.542 --rc genhtml_branch_coverage=1 00:08:12.542 --rc genhtml_function_coverage=1 00:08:12.542 --rc genhtml_legend=1 00:08:12.543 --rc geninfo_all_blocks=1 00:08:12.543 --rc geninfo_unexecuted_blocks=1 00:08:12.543 00:08:12.543 ' 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:12.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.543 --rc genhtml_branch_coverage=1 00:08:12.543 --rc genhtml_function_coverage=1 00:08:12.543 --rc genhtml_legend=1 00:08:12.543 --rc geninfo_all_blocks=1 00:08:12.543 --rc geninfo_unexecuted_blocks=1 00:08:12.543 00:08:12.543 ' 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:12.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.543 --rc genhtml_branch_coverage=1 00:08:12.543 --rc genhtml_function_coverage=1 00:08:12.543 --rc genhtml_legend=1 00:08:12.543 --rc geninfo_all_blocks=1 00:08:12.543 --rc geninfo_unexecuted_blocks=1 00:08:12.543 00:08:12.543 ' 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:12.543 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:19.113 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:19.113 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:19.113 Found net devices under 0000:86:00.0: cvl_0_0 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:19.113 12:48:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:19.113 Found net devices under 0000:86:00.1: cvl_0_1 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:08:19.113 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:19.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:08:19.114 00:08:19.114 --- 10.0.0.2 ping statistics --- 00:08:19.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.114 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:08:19.114 00:08:19.114 --- 10.0.0.1 ping statistics --- 00:08:19.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.114 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:19.114 only one NIC for nvmf test 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:19.114 12:48:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:19.114 rmmod nvme_tcp 00:08:19.114 rmmod nvme_fabrics 00:08:19.114 rmmod nvme_keyring 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.114 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' 
'' == iso ']' 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.491 00:08:20.491 real 0m8.411s 00:08:20.491 user 0m1.902s 00:08:20.491 sys 0m4.517s 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.491 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:20.491 ************************************ 00:08:20.491 END TEST nvmf_target_multipath 00:08:20.491 ************************************ 00:08:20.750 12:48:40 nvmf_tcp.nvmf_target_core 
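The multipath test above ends by tearing down the topology that `nvmf_tcp_init` built earlier in the log: one E810 port (`cvl_0_0`) is moved into a dedicated network namespace as the target side, the other (`cvl_0_1`) stays in the root namespace as the initiator, both are addressed on 10.0.0.0/24, TCP port 4420 is opened with a tagged iptables rule, and reachability is checked with `ping`. The following is a dry-run sketch of that sequence reconstructed from the trace; the interface names, namespace name, and addresses come from this log, while the `run` echo wrapper is added here so the commands can be inspected without root privileges (replace it with direct execution to apply them for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup traced in nvmf/common.sh (nvmf_tcp_init).
# run() only echoes each command; swap in direct execution (as root) to apply.
run() { echo "$*"; }

TARGET_IF=cvl_0_0        # port moved into the namespace (target side)
INITIATOR_IF=cvl_0_1     # port left in the root namespace (initiator side)
NS=cvl_0_0_ns_spdk       # namespace name used by the test harness

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port; the log tags the rule with an
# "SPDK_NVMF" comment so teardown can sweep it via iptables-save | grep -v.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# Sanity-check both directions, as the log does before starting the target.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The comment tag is what makes the teardown seen above (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) safe: only rules the harness inserted are removed.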
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:20.750 12:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:20.750 12:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.750 12:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.750 ************************************ 00:08:20.750 START TEST nvmf_zcopy 00:08:20.750 ************************************ 00:08:20.750 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:20.750 * Looking for test storage... 00:08:20.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.750 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:20.750 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:20.750 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:20.750 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:20.750 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.750 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.750 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.750 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.750 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.750 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:20.750 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.750 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.750 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.751 12:48:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:20.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.751 --rc genhtml_branch_coverage=1 00:08:20.751 --rc genhtml_function_coverage=1 00:08:20.751 --rc genhtml_legend=1 00:08:20.751 --rc geninfo_all_blocks=1 00:08:20.751 --rc geninfo_unexecuted_blocks=1 00:08:20.751 00:08:20.751 ' 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:20.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.751 --rc genhtml_branch_coverage=1 00:08:20.751 --rc genhtml_function_coverage=1 00:08:20.751 --rc genhtml_legend=1 00:08:20.751 --rc geninfo_all_blocks=1 00:08:20.751 --rc geninfo_unexecuted_blocks=1 00:08:20.751 00:08:20.751 ' 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:20.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.751 --rc genhtml_branch_coverage=1 00:08:20.751 --rc genhtml_function_coverage=1 00:08:20.751 --rc genhtml_legend=1 00:08:20.751 --rc geninfo_all_blocks=1 00:08:20.751 --rc geninfo_unexecuted_blocks=1 00:08:20.751 00:08:20.751 ' 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:20.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.751 --rc genhtml_branch_coverage=1 00:08:20.751 --rc 
genhtml_function_coverage=1 00:08:20.751 --rc genhtml_legend=1 00:08:20.751 --rc geninfo_all_blocks=1 00:08:20.751 --rc geninfo_unexecuted_blocks=1 00:08:20.751 00:08:20.751 ' 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.751 12:48:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:20.751 12:48:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.751 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.324 12:48:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.324 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:27.325 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:27.325 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:27.325 Found net devices under 0000:86:00.0: cvl_0_0 00:08:27.325 12:48:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:27.325 Found net devices under 0000:86:00.1: cvl_0_1 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.325 12:48:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:27.325 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:27.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:08:27.325 00:08:27.325 --- 10.0.0.2 ping statistics --- 00:08:27.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.325 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:08:27.325 00:08:27.325 --- 10.0.0.1 ping statistics --- 00:08:27.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.325 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1093375 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1093375 00:08:27.325 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1093375 ']' 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.326 [2024-10-15 12:48:47.152653] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:08:27.326 [2024-10-15 12:48:47.152698] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.326 [2024-10-15 12:48:47.223621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.326 [2024-10-15 12:48:47.264158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.326 [2024-10-15 12:48:47.264194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:27.326 [2024-10-15 12:48:47.264202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.326 [2024-10-15 12:48:47.264208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.326 [2024-10-15 12:48:47.264213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.326 [2024-10-15 12:48:47.264777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.326 [2024-10-15 12:48:47.399783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.326 [2024-10-15 12:48:47.419983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.326 malloc0 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:27.326 { 00:08:27.326 "params": { 00:08:27.326 "name": "Nvme$subsystem", 00:08:27.326 "trtype": "$TEST_TRANSPORT", 00:08:27.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.326 "adrfam": "ipv4", 00:08:27.326 "trsvcid": "$NVMF_PORT", 00:08:27.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.326 "hdgst": ${hdgst:-false}, 00:08:27.326 "ddgst": ${ddgst:-false} 00:08:27.326 }, 00:08:27.326 "method": "bdev_nvme_attach_controller" 00:08:27.326 } 00:08:27.326 EOF 00:08:27.326 )") 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:27.326 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:27.326 "params": { 00:08:27.326 "name": "Nvme1", 00:08:27.326 "trtype": "tcp", 00:08:27.326 "traddr": "10.0.0.2", 00:08:27.326 "adrfam": "ipv4", 00:08:27.326 "trsvcid": "4420", 00:08:27.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.326 "hdgst": false, 00:08:27.326 "ddgst": false 00:08:27.326 }, 00:08:27.326 "method": "bdev_nvme_attach_controller" 00:08:27.326 }' 00:08:27.326 [2024-10-15 12:48:47.499645] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:08:27.326 [2024-10-15 12:48:47.499686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093526 ] 00:08:27.326 [2024-10-15 12:48:47.557024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.326 [2024-10-15 12:48:47.597211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.585 Running I/O for 10 seconds... 
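The bdevperf run launched above is driven by a JSON config that `gen_nvmf_target_json` pipes to the process through `--json /dev/fd/62`; the expanded result is the `bdev_nvme_attach_controller` object printed in the log. A minimal stand-alone sketch of that heredoc expansion (hypothetical helper name `gen_target_entry`; variable defaults mirror the values seen in the log, while the real helper lives in nvmf/common.sh and assembles the entry with jq):

```shell
# Hypothetical stand-alone version of the heredoc expansion shown in the log.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT defaults mirror the log.
gen_target_entry() {
  local transport=${TEST_TRANSPORT:-tcp}
  local traddr=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
  local port=${NVMF_PORT:-4420}
  cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "$transport",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$port",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

config=$(gen_target_entry)
echo "$config"
```

A consumer could then feed the entry to bdevperf via process substitution, e.g. `bdevperf --json <(gen_target_entry) -t 10 -q 128 -w verify -o 8192`; the log achieves the same thing with a `/dev/fd/62` redirection.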
00:08:29.528 8570.00 IOPS, 66.95 MiB/s [2024-10-15T10:48:51.222Z] 8675.00 IOPS, 67.77 MiB/s [2024-10-15T10:48:52.159Z] 8652.00 IOPS, 67.59 MiB/s [2024-10-15T10:48:53.110Z] 8664.25 IOPS, 67.69 MiB/s [2024-10-15T10:48:54.043Z] 8682.80 IOPS, 67.83 MiB/s [2024-10-15T10:48:54.977Z] 8702.67 IOPS, 67.99 MiB/s [2024-10-15T10:48:55.913Z] 8715.29 IOPS, 68.09 MiB/s [2024-10-15T10:48:57.292Z] 8725.62 IOPS, 68.17 MiB/s [2024-10-15T10:48:57.878Z] 8730.89 IOPS, 68.21 MiB/s [2024-10-15T10:48:58.155Z] 8740.40 IOPS, 68.28 MiB/s 00:08:37.836 Latency(us) 00:08:37.836 [2024-10-15T10:48:58.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.836 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:37.836 Verification LBA range: start 0x0 length 0x1000 00:08:37.836 Nvme1n1 : 10.01 8741.54 68.29 0.00 0.00 14601.60 1654.00 23218.47 00:08:37.836 [2024-10-15T10:48:58.155Z] =================================================================================================================== 00:08:37.836 [2024-10-15T10:48:58.155Z] Total : 8741.54 68.29 0.00 0.00 14601.60 1654.00 23218.47 00:08:37.836 12:48:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1095236 00:08:37.836 12:48:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:37.836 12:48:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:37.836 12:48:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:37.836 12:48:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:37.837 12:48:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:37.837 12:48:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:37.837 12:48:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:37.837 12:48:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:37.837 { 00:08:37.837 "params": { 00:08:37.837 "name": "Nvme$subsystem", 00:08:37.837 "trtype": "$TEST_TRANSPORT", 00:08:37.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.837 "adrfam": "ipv4", 00:08:37.837 "trsvcid": "$NVMF_PORT", 00:08:37.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.837 "hdgst": ${hdgst:-false}, 00:08:37.837 "ddgst": ${ddgst:-false} 00:08:37.837 }, 00:08:37.837 "method": "bdev_nvme_attach_controller" 00:08:37.837 } 00:08:37.837 EOF 00:08:37.837 )") 00:08:37.837 12:48:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:37.837 [2024-10-15 12:48:58.038777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.837 [2024-10-15 12:48:58.038810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.837 12:48:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:08:37.837 12:48:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:37.837 12:48:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:37.837 "params": { 00:08:37.837 "name": "Nvme1", 00:08:37.837 "trtype": "tcp", 00:08:37.837 "traddr": "10.0.0.2", 00:08:37.837 "adrfam": "ipv4", 00:08:37.837 "trsvcid": "4420", 00:08:37.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:37.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:37.837 "hdgst": false, 00:08:37.837 "ddgst": false 00:08:37.837 }, 00:08:37.837 "method": "bdev_nvme_attach_controller" 00:08:37.837 }' 00:08:37.837 [2024-10-15 12:48:58.050782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.837 [2024-10-15 12:48:58.050796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.837 [2024-10-15 12:48:58.062809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.837 [2024-10-15 12:48:58.062821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.837 [2024-10-15 12:48:58.074838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.837 [2024-10-15 12:48:58.074848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.837 [2024-10-15 12:48:58.079525] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:08:37.837 [2024-10-15 12:48:58.079566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095236 ] 00:08:37.837 [2024-10-15 12:48:58.086878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.837 [2024-10-15 12:48:58.086889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.837 [2024-10-15 12:48:58.098898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.837 [2024-10-15 12:48:58.098908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.837 [2024-10-15 12:48:58.110935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.837 [2024-10-15 12:48:58.110950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.837 [2024-10-15 12:48:58.122967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.837 [2024-10-15 12:48:58.122979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.837 [2024-10-15 12:48:58.134996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.837 [2024-10-15 12:48:58.135006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.837 [2024-10-15 12:48:58.147030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.837 [2024-10-15 12:48:58.147039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.837 [2024-10-15 12:48:58.148847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.095 [2024-10-15 12:48:58.159065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:38.095 [2024-10-15 12:48:58.159078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.095 [2024-10-15 12:48:58.171094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.095 [2024-10-15 12:48:58.171105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.095 [2024-10-15 12:48:58.183131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.095 [2024-10-15 12:48:58.183145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.095 [2024-10-15 12:48:58.189932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.095 [2024-10-15 12:48:58.195160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.095 [2024-10-15 12:48:58.195171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.095 [2024-10-15 12:48:58.207204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.095 [2024-10-15 12:48:58.207223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.095 [2024-10-15 12:48:58.219230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.095 [2024-10-15 12:48:58.219250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.095 [2024-10-15 12:48:58.231259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.095 [2024-10-15 12:48:58.231275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.095 [2024-10-15 12:48:58.243286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.095 [2024-10-15 12:48:58.243300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.095 [2024-10-15 12:48:58.255324] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.095 [2024-10-15 12:48:58.255338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2128 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" error pair repeats continuously from 12:48:58.267 through 12:48:58.544 ...]
00:08:38.356 Running I/O for 5 seconds...
[... the error pair continues to repeat from 12:48:58.560 through 12:48:59.541 ...]
00:08:39.394 16735.00 IOPS, 130.74 MiB/s [2024-10-15T10:48:59.713Z]
[... the error pair continues to repeat from 12:48:59.550 through 12:49:00.141 ...]
[2024-10-15 12:49:00.155415]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.914 [2024-10-15 12:49:00.155433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.914 [2024-10-15 12:49:00.169516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.914 [2024-10-15 12:49:00.169534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.914 [2024-10-15 12:49:00.179808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.914 [2024-10-15 12:49:00.179825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.914 [2024-10-15 12:49:00.194201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.914 [2024-10-15 12:49:00.194220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.914 [2024-10-15 12:49:00.207607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.914 [2024-10-15 12:49:00.207625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.914 [2024-10-15 12:49:00.221422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.914 [2024-10-15 12:49:00.221440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.914 [2024-10-15 12:49:00.235535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.914 [2024-10-15 12:49:00.235554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.249483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.249503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.263521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.263539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.277273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.277291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.291268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.291286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.304991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.305010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.318991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.319009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.333187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.333205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.347203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.347221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.358678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.358696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.372165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 
[2024-10-15 12:49:00.372182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.386192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.386210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.399707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.399726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.413563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.413580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.427456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.427475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.441495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.441514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.455553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.455571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.469552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.469570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.173 [2024-10-15 12:49:00.483197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.173 [2024-10-15 12:49:00.483215] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.432 [2024-10-15 12:49:00.497133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.432 [2024-10-15 12:49:00.497152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.432 [2024-10-15 12:49:00.510644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.432 [2024-10-15 12:49:00.510663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.524835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.524864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.538565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.538583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 16793.00 IOPS, 131.20 MiB/s [2024-10-15T10:49:00.752Z] [2024-10-15 12:49:00.552788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.552806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.566221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.566238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.580155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.580173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.593763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.593781] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.608176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.608199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.618991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.619009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.633255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.633273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.647126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.647144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.660956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.660974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.675225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.675242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.690981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.690999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.704773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.704791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:40.433 [2024-10-15 12:49:00.719000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.719019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.730228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.730246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.433 [2024-10-15 12:49:00.744403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.433 [2024-10-15 12:49:00.744423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.757796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.757815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.771423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.771442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.785219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.785238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.798875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.798893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.812204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.812222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.826501] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.826519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.837372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.837390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.851725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.851743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.865821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.865844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.879467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.879485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.893868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.893886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.909528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.909546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.923560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.923579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.937569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.937587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.951141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.951161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.965266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.965284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.979203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.979221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:00.993090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:00.993108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.692 [2024-10-15 12:49:01.002299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.692 [2024-10-15 12:49:01.002317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.951 [2024-10-15 12:49:01.016721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.951 [2024-10-15 12:49:01.016740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.951 [2024-10-15 12:49:01.030386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.030404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.044564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 
[2024-10-15 12:49:01.044582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.058611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.058630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.067292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.067312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.076496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.076516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.085866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.085885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.100882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.100902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.111530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.111553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.121226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.121245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.135590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.135616] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.149489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.149507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.163538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.163557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.177531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.177549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.191519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.191539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.205385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.205404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.219466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.219485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.233508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.233527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.952 [2024-10-15 12:49:01.247244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.247262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:40.952 [2024-10-15 12:49:01.260645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.952 [2024-10-15 12:49:01.260665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.274682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.274702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.288041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.288060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.301937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.301956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.315405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.315424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.329655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.329674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.340816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.340834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.355186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.355205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.368940] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.368963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.382941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.382960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.396427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.396446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.410243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.410261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.424095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.424114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.438095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.438114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.452022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.452040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.465831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.465850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.479705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.479723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.493514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.493532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.507103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.507121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.212 [2024-10-15 12:49:01.520774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.212 [2024-10-15 12:49:01.520793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.534505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.534525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.548299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.548318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 16816.00 IOPS, 131.38 MiB/s [2024-10-15T10:49:01.790Z] [2024-10-15 12:49:01.562138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.562156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.576035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.576053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.589628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.589646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.603270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.603288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.617232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.617249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.630798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.630817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.644414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.644433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.658589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.658614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.672421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.672440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.686275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.686294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.700032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 
[2024-10-15 12:49:01.700050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.713805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.713823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.727882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.727901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.741588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.741610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.755423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.755442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.769120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.769140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.471 [2024-10-15 12:49:01.782839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.471 [2024-10-15 12:49:01.782857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.731 [2024-10-15 12:49:01.797468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.731 [2024-10-15 12:49:01.797488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.731 [2024-10-15 12:49:01.812509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.731 [2024-10-15 12:49:01.812527] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:41.731 [2024-10-15 12:49:01.826562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:41.731 [2024-10-15 12:49:01.826580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:42.251 16826.50 IOPS, 131.46 MiB/s [2024-10-15T10:49:02.570Z]
00:08:43.289 [2024-10-15 12:49:03.557111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:43.289 [2024-10-15 12:49:03.557129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:43.289 16837.40 IOPS, 131.54 MiB/s
00:08:43.289 Latency(us)
00:08:43.289 [2024-10-15T10:49:03.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:43.289 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:43.289 Nvme1n1 : 5.01 16843.66 131.59 0.00 0.00 7593.22 3386.03 19723.22
00:08:43.289 [2024-10-15T10:49:03.608Z] ===================================================================================================================
00:08:43.289 [2024-10-15T10:49:03.608Z] Total : 16843.66 131.59 0.00 0.00 7593.22 3386.03 19723.22
00:08:43.548 [2024-10-15 12:49:03.723574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:43.548 [2024-10-15 12:49:03.723585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:43.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1095236) - No such process
00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1095236
00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n
1000000 00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.548 delay0 00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.548 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:43.807 [2024-10-15 12:49:03.903686] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:50.375 [2024-10-15 12:49:09.955106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af28e0 is same with the state(6) to be set 00:08:50.375 Initializing NVMe Controllers 00:08:50.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:50.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:50.375 Initialization complete. Launching workers. 
00:08:50.375 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 70 00:08:50.375 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 357, failed to submit 33 00:08:50.375 success 152, unsuccessful 205, failed 0 00:08:50.375 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:50.375 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:50.375 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:50.375 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:50.375 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.375 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:50.375 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.375 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.375 rmmod nvme_tcp 00:08:50.375 rmmod nvme_fabrics 00:08:50.375 rmmod nvme_keyring 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1093375 ']' 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1093375 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1093375 ']' 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1093375 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1093375 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1093375' 00:08:50.375 killing process with pid 1093375 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1093375 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1093375 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.375 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:52.282 00:08:52.282 real 0m31.464s 00:08:52.282 user 0m42.056s 00:08:52.282 sys 0m11.086s 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.282 ************************************ 00:08:52.282 END TEST nvmf_zcopy 00:08:52.282 ************************************ 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.282 ************************************ 00:08:52.282 START TEST nvmf_nmic 00:08:52.282 ************************************ 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:52.282 * Looking for test storage... 
00:08:52.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.282 12:49:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.282 --rc genhtml_branch_coverage=1 00:08:52.282 --rc genhtml_function_coverage=1 00:08:52.282 --rc genhtml_legend=1 00:08:52.282 --rc geninfo_all_blocks=1 00:08:52.282 --rc geninfo_unexecuted_blocks=1 
00:08:52.282 00:08:52.282 ' 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.282 --rc genhtml_branch_coverage=1 00:08:52.282 --rc genhtml_function_coverage=1 00:08:52.282 --rc genhtml_legend=1 00:08:52.282 --rc geninfo_all_blocks=1 00:08:52.282 --rc geninfo_unexecuted_blocks=1 00:08:52.282 00:08:52.282 ' 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.282 --rc genhtml_branch_coverage=1 00:08:52.282 --rc genhtml_function_coverage=1 00:08:52.282 --rc genhtml_legend=1 00:08:52.282 --rc geninfo_all_blocks=1 00:08:52.282 --rc geninfo_unexecuted_blocks=1 00:08:52.282 00:08:52.282 ' 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.282 --rc genhtml_branch_coverage=1 00:08:52.282 --rc genhtml_function_coverage=1 00:08:52.282 --rc genhtml_legend=1 00:08:52.282 --rc geninfo_all_blocks=1 00:08:52.282 --rc geninfo_unexecuted_blocks=1 00:08:52.282 00:08:52.282 ' 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.282 12:49:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.282 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.283 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.542 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:52.542 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:52.542 
12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:52.542 12:49:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.121 12:49:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:59.121 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:59.121 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:59.121 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:59.122 Found net devices under 0000:86:00.0: cvl_0_0 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:59.122 Found net devices under 0000:86:00.1: cvl_0_1 00:08:59.122 
12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:08:59.122 00:08:59.122 --- 10.0.0.2 ping statistics --- 00:08:59.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.122 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:59.122 00:08:59.122 --- 10.0.0.1 ping statistics --- 00:08:59.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.122 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1100848 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1100848 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1100848 ']' 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.122 [2024-10-15 12:49:18.628560] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:08:59.122 [2024-10-15 12:49:18.628616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.122 [2024-10-15 12:49:18.701733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.122 [2024-10-15 12:49:18.743746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.122 [2024-10-15 12:49:18.743784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:59.122 [2024-10-15 12:49:18.743791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.122 [2024-10-15 12:49:18.743797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.122 [2024-10-15 12:49:18.743803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.122 [2024-10-15 12:49:18.745343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.122 [2024-10-15 12:49:18.745451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.122 [2024-10-15 12:49:18.745537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.122 [2024-10-15 12:49:18.745538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.122 [2024-10-15 12:49:18.890245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.122 
12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:59.122 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.123 Malloc0 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.123 [2024-10-15 12:49:18.964339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:59.123 test case1: single bdev can't be used in multiple subsystems 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.123 [2024-10-15 12:49:18.988200] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:59.123 [2024-10-15 
12:49:18.988221] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:59.123 [2024-10-15 12:49:18.988228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.123 request: 00:08:59.123 { 00:08:59.123 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:59.123 "namespace": { 00:08:59.123 "bdev_name": "Malloc0", 00:08:59.123 "no_auto_visible": false 00:08:59.123 }, 00:08:59.123 "method": "nvmf_subsystem_add_ns", 00:08:59.123 "req_id": 1 00:08:59.123 } 00:08:59.123 Got JSON-RPC error response 00:08:59.123 response: 00:08:59.123 { 00:08:59.123 "code": -32602, 00:08:59.123 "message": "Invalid parameters" 00:08:59.123 } 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:59.123 Adding namespace failed - expected result. 
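Test case 1 above exercises an expected failure: Malloc0 is already claimed by cnode1, so adding it to cnode2 must return the -32602 JSON-RPC error shown. The control flow of nmic.sh@28-36 can be sketched as follows, with rpc_cmd stubbed to fail so no running target is needed:

```shell
# Sketch of the expected-failure pattern from target/nmic.sh@28-36 above.
# rpc_cmd is a stub standing in for the failing nvmf_subsystem_add_ns call
# (the real one talks to the SPDK target over /var/tmp/spdk.sock).
rpc_cmd() { return 1; }
nmic_status=0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
if [ "$nmic_status" -eq 0 ]; then
  echo 'Adding namespace passed - failure expected.'
else
  echo ' Adding namespace failed - expected result.'
fi
```

Note the inverted check: a zero status (namespace added successfully) would be the test failure here, because one bdev must not be shareable across subsystems without explicit multi-attach support.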
00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:59.123 test case2: host connect to nvmf target in multiple paths 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.123 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.123 [2024-10-15 12:49:19.000348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:59.123 12:49:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.123 12:49:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.061 12:49:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:01.000 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:01.000 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:01.000 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:01.000 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:01.000 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
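After the two nvme connect calls above, the harness polls until the namespace's block device appears, via the waitforserial helper (autotest_common.sh@1198-1208 in the surrounding log). A hedged sketch of that polling loop, with lsblk stubbed so the control flow runs without real NVMe devices:

```shell
# Sketch of the waitforserial loop seen in the log above: poll lsblk until
# a block device with the expected serial shows up, retrying up to 16 times.
# lsblk is stubbed here; the real helper reads actual block devices.
lsblk() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }
waitforserial() {
  local serial=$1 i=0 nvme_devices=0
  while (( i++ <= 15 )); do
    # count devices whose SERIAL column matches
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices >= 1 )) && return 0
    sleep 2
  done
  return 1
}
waitforserial SPDKISFASTANDAWESOME && echo 'serial visible'
```

Once the device (here /dev/nvme0n1) is visible, the fio-wrapper job below can target it by filename.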
00:09:02.905 12:49:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:03.186 12:49:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:03.186 12:49:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:03.186 12:49:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:03.186 12:49:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:03.186 12:49:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:03.186 12:49:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:03.186 [global] 00:09:03.186 thread=1 00:09:03.186 invalidate=1 00:09:03.186 rw=write 00:09:03.186 time_based=1 00:09:03.186 runtime=1 00:09:03.186 ioengine=libaio 00:09:03.186 direct=1 00:09:03.186 bs=4096 00:09:03.186 iodepth=1 00:09:03.186 norandommap=0 00:09:03.186 numjobs=1 00:09:03.186 00:09:03.186 verify_dump=1 00:09:03.186 verify_backlog=512 00:09:03.186 verify_state_save=0 00:09:03.186 do_verify=1 00:09:03.186 verify=crc32c-intel 00:09:03.186 [job0] 00:09:03.186 filename=/dev/nvme0n1 00:09:03.186 Could not set queue depth (nvme0n1) 00:09:03.446 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.446 fio-3.35 00:09:03.446 Starting 1 thread 00:09:04.383 00:09:04.383 job0: (groupid=0, jobs=1): err= 0: pid=1101835: Tue Oct 15 12:49:24 2024 00:09:04.383 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:09:04.383 slat (nsec): min=9349, max=24333, avg=22931.82, stdev=3044.44 00:09:04.383 clat (usec): min=40914, max=41384, avg=40990.61, stdev=92.77 00:09:04.383 lat (usec): min=40938, max=41393, 
avg=41013.54, stdev=89.93 00:09:04.383 clat percentiles (usec): 00:09:04.383 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:04.383 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:04.383 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:04.383 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:04.383 | 99.99th=[41157] 00:09:04.383 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:09:04.383 slat (usec): min=9, max=27490, avg=64.27, stdev=1214.46 00:09:04.383 clat (usec): min=112, max=363, avg=138.76, stdev=31.52 00:09:04.383 lat (usec): min=122, max=27778, avg=203.03, stdev=1221.44 00:09:04.383 clat percentiles (usec): 00:09:04.383 | 1.00th=[ 120], 5.00th=[ 121], 10.00th=[ 122], 20.00th=[ 123], 00:09:04.383 | 30.00th=[ 124], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 129], 00:09:04.383 | 70.00th=[ 133], 80.00th=[ 151], 90.00th=[ 163], 95.00th=[ 241], 00:09:04.383 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 363], 99.95th=[ 363], 00:09:04.383 | 99.99th=[ 363] 00:09:04.383 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:04.383 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:04.383 lat (usec) : 250=95.51%, 500=0.37% 00:09:04.383 lat (msec) : 50=4.12% 00:09:04.383 cpu : usr=0.10%, sys=0.70%, ctx=537, majf=0, minf=1 00:09:04.383 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:04.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.383 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.383 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:04.383 00:09:04.383 Run status group 0 (all jobs): 00:09:04.383 READ: bw=87.3KiB/s (89.4kB/s), 87.3KiB/s-87.3KiB/s (89.4kB/s-89.4kB/s), io=88.0KiB (90.1kB), 
run=1008-1008msec 00:09:04.383 WRITE: bw=2032KiB/s (2081kB/s), 2032KiB/s-2032KiB/s (2081kB/s-2081kB/s), io=2048KiB (2097kB), run=1008-1008msec 00:09:04.383 00:09:04.383 Disk stats (read/write): 00:09:04.383 nvme0n1: ios=45/512, merge=0/0, ticks=1765/69, in_queue=1834, util=98.50% 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:04.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:09:04.642 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:04.642 rmmod nvme_tcp 00:09:04.642 rmmod nvme_fabrics 00:09:04.901 rmmod nvme_keyring 00:09:04.901 12:49:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1100848 ']' 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1100848 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1100848 ']' 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1100848 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1100848 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1100848' 00:09:04.901 killing process with pid 1100848 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1100848 00:09:04.901 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1100848 00:09:05.161 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:05.161 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:05.161 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:05.161 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:05.161 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:05.161 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:05.161 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:05.161 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.161 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:05.161 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.161 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.161 12:49:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.073 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:07.073 00:09:07.073 real 0m14.936s 00:09:07.073 user 0m33.115s 00:09:07.073 sys 0m5.287s 00:09:07.073 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.073 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.073 ************************************ 00:09:07.073 END TEST nvmf_nmic 00:09:07.073 ************************************ 00:09:07.073 12:49:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:09:07.073 12:49:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:07.073 12:49:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.073 12:49:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 ************************************ 00:09:07.332 START TEST nvmf_fio_target 00:09:07.332 ************************************ 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:07.333 * Looking for test storage... 00:09:07.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.333 12:49:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:07.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.333 --rc genhtml_branch_coverage=1 00:09:07.333 --rc genhtml_function_coverage=1 00:09:07.333 --rc genhtml_legend=1 00:09:07.333 --rc geninfo_all_blocks=1 00:09:07.333 --rc geninfo_unexecuted_blocks=1 00:09:07.333 00:09:07.333 ' 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:07.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.333 --rc genhtml_branch_coverage=1 00:09:07.333 --rc genhtml_function_coverage=1 00:09:07.333 --rc genhtml_legend=1 00:09:07.333 --rc geninfo_all_blocks=1 00:09:07.333 --rc geninfo_unexecuted_blocks=1 00:09:07.333 00:09:07.333 ' 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:07.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.333 --rc genhtml_branch_coverage=1 00:09:07.333 --rc genhtml_function_coverage=1 00:09:07.333 --rc genhtml_legend=1 00:09:07.333 --rc geninfo_all_blocks=1 00:09:07.333 --rc geninfo_unexecuted_blocks=1 00:09:07.333 00:09:07.333 ' 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:07.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.333 --rc 
genhtml_branch_coverage=1 00:09:07.333 --rc genhtml_function_coverage=1 00:09:07.333 --rc genhtml_legend=1 00:09:07.333 --rc geninfo_all_blocks=1 00:09:07.333 --rc geninfo_unexecuted_blocks=1 00:09:07.333 00:09:07.333 ' 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.333 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:07.334 12:49:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:14.003 12:49:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:14.003 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:14.003 12:49:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:14.003 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:14.003 Found net devices under 0000:86:00.0: cvl_0_0 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:14.003 Found net devices under 0000:86:00.1: cvl_0_1 
00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.003 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:14.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:09:14.004 00:09:14.004 --- 10.0.0.2 ping statistics --- 00:09:14.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.004 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:09:14.004 00:09:14.004 --- 10.0.0.1 ping statistics --- 00:09:14.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.004 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 
00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1105695 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1105695 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1105695 ']' 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.004 [2024-10-15 12:49:33.664351] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:09:14.004 [2024-10-15 12:49:33.664396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.004 [2024-10-15 12:49:33.737972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.004 [2024-10-15 12:49:33.777671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.004 [2024-10-15 12:49:33.777710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.004 [2024-10-15 12:49:33.777717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.004 [2024-10-15 12:49:33.777723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.004 [2024-10-15 12:49:33.777728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:14.004 [2024-10-15 12:49:33.779301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.004 [2024-10-15 12:49:33.779411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.004 [2024-10-15 12:49:33.779495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.004 [2024-10-15 12:49:33.779496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.004 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:14.004 [2024-10-15 12:49:34.084949] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.004 12:49:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:14.264 12:49:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:14.264 12:49:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:14.264 12:49:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:14.264 12:49:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:14.523 12:49:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:14.523 12:49:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:14.782 12:49:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:14.782 12:49:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:15.041 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.300 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:15.300 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.300 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:15.300 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.559 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:15.559 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:15.817 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.077 12:49:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:16.077 12:49:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.335 12:49:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:16.335 12:49:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:16.336 12:49:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.594 [2024-10-15 12:49:36.768633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.594 12:49:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:16.853 12:49:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:17.111 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:18.494 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:18.494 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:18.494 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.494 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:18.494 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:18.494 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.415 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.415 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.415 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.415 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:20.416 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.416 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:20.416 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:20.416 [global] 00:09:20.416 thread=1 00:09:20.416 invalidate=1 00:09:20.416 rw=write 00:09:20.416 time_based=1 00:09:20.416 runtime=1 00:09:20.416 ioengine=libaio 00:09:20.416 direct=1 00:09:20.416 bs=4096 00:09:20.416 iodepth=1 00:09:20.416 norandommap=0 00:09:20.416 numjobs=1 00:09:20.416 00:09:20.416 
verify_dump=1 00:09:20.416 verify_backlog=512 00:09:20.416 verify_state_save=0 00:09:20.416 do_verify=1 00:09:20.416 verify=crc32c-intel 00:09:20.416 [job0] 00:09:20.416 filename=/dev/nvme0n1 00:09:20.416 [job1] 00:09:20.416 filename=/dev/nvme0n2 00:09:20.416 [job2] 00:09:20.416 filename=/dev/nvme0n3 00:09:20.416 [job3] 00:09:20.416 filename=/dev/nvme0n4 00:09:20.416 Could not set queue depth (nvme0n1) 00:09:20.416 Could not set queue depth (nvme0n2) 00:09:20.416 Could not set queue depth (nvme0n3) 00:09:20.416 Could not set queue depth (nvme0n4) 00:09:20.674 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.674 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.674 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.674 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.674 fio-3.35 00:09:20.674 Starting 4 threads 00:09:22.110 00:09:22.110 job0: (groupid=0, jobs=1): err= 0: pid=1107040: Tue Oct 15 12:49:41 2024 00:09:22.110 read: IOPS=1188, BW=4753KiB/s (4867kB/s)(4796KiB/1009msec) 00:09:22.110 slat (nsec): min=7107, max=25595, avg=8404.30, stdev=1910.91 00:09:22.110 clat (usec): min=176, max=41215, avg=597.93, stdev=3891.08 00:09:22.110 lat (usec): min=183, max=41223, avg=606.34, stdev=3891.43 00:09:22.110 clat percentiles (usec): 00:09:22.110 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:09:22.110 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:09:22.110 | 70.00th=[ 231], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 281], 00:09:22.110 | 99.00th=[ 334], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:22.110 | 99.99th=[41157] 00:09:22.110 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:09:22.110 slat (nsec): min=10488, max=55526, avg=11902.34, 
stdev=2143.91 00:09:22.110 clat (usec): min=118, max=271, avg=165.69, stdev=33.07 00:09:22.110 lat (usec): min=130, max=284, avg=177.60, stdev=33.25 00:09:22.110 clat percentiles (usec): 00:09:22.110 | 1.00th=[ 125], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 141], 00:09:22.110 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 161], 00:09:22.110 | 70.00th=[ 172], 80.00th=[ 188], 90.00th=[ 235], 95.00th=[ 241], 00:09:22.110 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 269], 99.95th=[ 273], 00:09:22.110 | 99.99th=[ 273] 00:09:22.110 bw ( KiB/s): min= 5240, max= 7048, per=22.96%, avg=6144.00, stdev=1278.45, samples=2 00:09:22.110 iops : min= 1310, max= 1762, avg=1536.00, stdev=319.61, samples=2 00:09:22.110 lat (usec) : 250=90.57%, 500=9.03% 00:09:22.110 lat (msec) : 50=0.40% 00:09:22.110 cpu : usr=2.68%, sys=3.97%, ctx=2736, majf=0, minf=1 00:09:22.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.110 issued rwts: total=1199,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.110 job1: (groupid=0, jobs=1): err= 0: pid=1107042: Tue Oct 15 12:49:41 2024 00:09:22.110 read: IOPS=1510, BW=6041KiB/s (6186kB/s)(6180KiB/1023msec) 00:09:22.110 slat (nsec): min=6581, max=27665, avg=7443.98, stdev=1475.16 00:09:22.110 clat (usec): min=174, max=42032, avg=436.33, stdev=2948.16 00:09:22.110 lat (usec): min=181, max=42050, avg=443.77, stdev=2949.14 00:09:22.110 clat percentiles (usec): 00:09:22.110 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:09:22.110 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 229], 00:09:22.110 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 260], 00:09:22.110 | 99.00th=[ 273], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 
00:09:22.110 | 99.99th=[42206] 00:09:22.110 write: IOPS=2001, BW=8008KiB/s (8200kB/s)(8192KiB/1023msec); 0 zone resets 00:09:22.110 slat (nsec): min=9505, max=36907, avg=10701.39, stdev=1358.61 00:09:22.110 clat (usec): min=111, max=311, avg=150.11, stdev=19.11 00:09:22.110 lat (usec): min=121, max=347, avg=160.82, stdev=19.44 00:09:22.110 clat percentiles (usec): 00:09:22.110 | 1.00th=[ 119], 5.00th=[ 126], 10.00th=[ 131], 20.00th=[ 137], 00:09:22.110 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 153], 00:09:22.110 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 176], 00:09:22.110 | 99.00th=[ 243], 99.50th=[ 243], 99.90th=[ 249], 99.95th=[ 277], 00:09:22.110 | 99.99th=[ 310] 00:09:22.110 bw ( KiB/s): min= 4096, max=12288, per=30.61%, avg=8192.00, stdev=5792.62, samples=2 00:09:22.110 iops : min= 1024, max= 3072, avg=2048.00, stdev=1448.15, samples=2 00:09:22.110 lat (usec) : 250=93.46%, 500=6.32% 00:09:22.110 lat (msec) : 50=0.22% 00:09:22.110 cpu : usr=1.47%, sys=3.62%, ctx=3594, majf=0, minf=1 00:09:22.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.110 issued rwts: total=1545,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.110 job2: (groupid=0, jobs=1): err= 0: pid=1107048: Tue Oct 15 12:49:41 2024 00:09:22.110 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:22.110 slat (nsec): min=6372, max=26931, avg=7847.48, stdev=1810.97 00:09:22.110 clat (usec): min=185, max=41390, avg=411.16, stdev=2543.48 00:09:22.110 lat (usec): min=193, max=41397, avg=419.01, stdev=2543.48 00:09:22.110 clat percentiles (usec): 00:09:22.110 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 223], 00:09:22.110 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 245], 
60.00th=[ 255], 00:09:22.110 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 322], 00:09:22.110 | 99.00th=[ 433], 99.50th=[ 445], 99.90th=[41157], 99.95th=[41157], 00:09:22.110 | 99.99th=[41157] 00:09:22.110 write: IOPS=1963, BW=7852KiB/s (8041kB/s)(7860KiB/1001msec); 0 zone resets 00:09:22.110 slat (nsec): min=9299, max=64188, avg=11290.14, stdev=3322.72 00:09:22.110 clat (usec): min=122, max=276, avg=165.99, stdev=21.03 00:09:22.110 lat (usec): min=132, max=340, avg=177.28, stdev=21.68 00:09:22.110 clat percentiles (usec): 00:09:22.110 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 149], 00:09:22.110 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 169], 00:09:22.110 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 204], 00:09:22.110 | 99.00th=[ 219], 99.50th=[ 223], 99.90th=[ 269], 99.95th=[ 277], 00:09:22.110 | 99.99th=[ 277] 00:09:22.110 bw ( KiB/s): min= 8192, max= 8192, per=30.61%, avg=8192.00, stdev= 0.00, samples=1 00:09:22.110 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:22.110 lat (usec) : 250=80.63%, 500=19.19% 00:09:22.110 lat (msec) : 50=0.17% 00:09:22.110 cpu : usr=1.40%, sys=3.80%, ctx=3502, majf=0, minf=1 00:09:22.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.110 issued rwts: total=1536,1965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.110 job3: (groupid=0, jobs=1): err= 0: pid=1107049: Tue Oct 15 12:49:41 2024 00:09:22.110 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:22.110 slat (nsec): min=6686, max=32758, avg=8703.92, stdev=3233.70 00:09:22.110 clat (usec): min=190, max=41286, avg=719.43, stdev=4269.16 00:09:22.110 lat (usec): min=197, max=41294, avg=728.13, stdev=4269.77 
00:09:22.110 clat percentiles (usec): 00:09:22.110 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 221], 00:09:22.110 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:09:22.110 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 314], 95.00th=[ 347], 00:09:22.110 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:22.110 | 99.99th=[41157] 00:09:22.110 write: IOPS=1294, BW=5179KiB/s (5303kB/s)(5184KiB/1001msec); 0 zone resets 00:09:22.110 slat (nsec): min=9780, max=47369, avg=11327.16, stdev=2736.17 00:09:22.110 clat (usec): min=125, max=348, avg=180.54, stdev=30.04 00:09:22.110 lat (usec): min=136, max=376, avg=191.87, stdev=30.72 00:09:22.110 clat percentiles (usec): 00:09:22.110 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:09:22.110 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:09:22.110 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 255], 00:09:22.110 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 314], 99.95th=[ 347], 00:09:22.110 | 99.99th=[ 347] 00:09:22.110 bw ( KiB/s): min= 8192, max= 8192, per=30.61%, avg=8192.00, stdev= 0.00, samples=1 00:09:22.110 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:22.110 lat (usec) : 250=82.16%, 500=17.28% 00:09:22.110 lat (msec) : 20=0.04%, 50=0.52% 00:09:22.110 cpu : usr=1.40%, sys=2.20%, ctx=2321, majf=0, minf=1 00:09:22.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.110 issued rwts: total=1024,1296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.110 00:09:22.110 Run status group 0 (all jobs): 00:09:22.110 READ: bw=20.3MiB/s (21.2MB/s), 4092KiB/s-6138KiB/s (4190kB/s-6285kB/s), io=20.7MiB (21.7MB), run=1001-1023msec 
00:09:22.110 WRITE: bw=26.1MiB/s (27.4MB/s), 5179KiB/s-8008KiB/s (5303kB/s-8200kB/s), io=26.7MiB (28.0MB), run=1001-1023msec 00:09:22.110 00:09:22.110 Disk stats (read/write): 00:09:22.110 nvme0n1: ios=1050/1307, merge=0/0, ticks=1570/210, in_queue=1780, util=98.00% 00:09:22.110 nvme0n2: ios=1591/2048, merge=0/0, ticks=1309/287, in_queue=1596, util=98.58% 00:09:22.110 nvme0n3: ios=1232/1536, merge=0/0, ticks=550/248, in_queue=798, util=89.04% 00:09:22.110 nvme0n4: ios=972/1024, merge=0/0, ticks=1615/182, in_queue=1797, util=98.63% 00:09:22.110 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:22.110 [global] 00:09:22.110 thread=1 00:09:22.110 invalidate=1 00:09:22.110 rw=randwrite 00:09:22.110 time_based=1 00:09:22.110 runtime=1 00:09:22.110 ioengine=libaio 00:09:22.110 direct=1 00:09:22.110 bs=4096 00:09:22.110 iodepth=1 00:09:22.111 norandommap=0 00:09:22.111 numjobs=1 00:09:22.111 00:09:22.111 verify_dump=1 00:09:22.111 verify_backlog=512 00:09:22.111 verify_state_save=0 00:09:22.111 do_verify=1 00:09:22.111 verify=crc32c-intel 00:09:22.111 [job0] 00:09:22.111 filename=/dev/nvme0n1 00:09:22.111 [job1] 00:09:22.111 filename=/dev/nvme0n2 00:09:22.111 [job2] 00:09:22.111 filename=/dev/nvme0n3 00:09:22.111 [job3] 00:09:22.111 filename=/dev/nvme0n4 00:09:22.111 Could not set queue depth (nvme0n1) 00:09:22.111 Could not set queue depth (nvme0n2) 00:09:22.111 Could not set queue depth (nvme0n3) 00:09:22.111 Could not set queue depth (nvme0n4) 00:09:22.111 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.111 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.111 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.111 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.111 fio-3.35 00:09:22.111 Starting 4 threads 00:09:23.489 00:09:23.489 job0: (groupid=0, jobs=1): err= 0: pid=1107423: Tue Oct 15 12:49:43 2024 00:09:23.489 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:09:23.489 slat (nsec): min=7909, max=23213, avg=16197.68, stdev=5624.14 00:09:23.489 clat (usec): min=40807, max=42015, avg=41159.73, stdev=401.63 00:09:23.489 lat (usec): min=40827, max=42023, avg=41175.92, stdev=398.82 00:09:23.489 clat percentiles (usec): 00:09:23.489 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:23.489 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:23.489 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:09:23.489 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:23.489 | 99.99th=[42206] 00:09:23.489 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:09:23.489 slat (nsec): min=9447, max=39868, avg=11471.54, stdev=2900.89 00:09:23.489 clat (usec): min=126, max=903, avg=220.04, stdev=60.43 00:09:23.489 lat (usec): min=137, max=913, avg=231.51, stdev=60.71 00:09:23.489 clat percentiles (usec): 00:09:23.489 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 159], 00:09:23.489 | 30.00th=[ 188], 40.00th=[ 239], 50.00th=[ 239], 60.00th=[ 241], 00:09:23.489 | 70.00th=[ 243], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 251], 00:09:23.489 | 99.00th=[ 363], 99.50th=[ 570], 99.90th=[ 906], 99.95th=[ 906], 00:09:23.489 | 99.99th=[ 906] 00:09:23.489 bw ( KiB/s): min= 4096, max= 4096, per=24.22%, avg=4096.00, stdev= 0.00, samples=1 00:09:23.489 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:23.489 lat (usec) : 250=90.82%, 500=4.31%, 750=0.56%, 1000=0.19% 00:09:23.489 lat (msec) : 50=4.12% 00:09:23.489 cpu : usr=0.20%, sys=0.98%, ctx=534, majf=0, minf=1 00:09:23.489 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.489 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.489 job1: (groupid=0, jobs=1): err= 0: pid=1107424: Tue Oct 15 12:49:43 2024 00:09:23.489 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:23.489 slat (nsec): min=6615, max=41975, avg=8006.23, stdev=1707.46 00:09:23.489 clat (usec): min=156, max=2673, avg=206.36, stdev=68.40 00:09:23.489 lat (usec): min=171, max=2697, avg=214.37, stdev=68.83 00:09:23.489 clat percentiles (usec): 00:09:23.489 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:09:23.489 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 202], 00:09:23.489 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 249], 00:09:23.489 | 99.00th=[ 383], 99.50th=[ 396], 99.90th=[ 457], 99.95th=[ 1991], 00:09:23.489 | 99.99th=[ 2671] 00:09:23.489 write: IOPS=2798, BW=10.9MiB/s (11.5MB/s)(10.9MiB/1001msec); 0 zone resets 00:09:23.489 slat (nsec): min=9063, max=47863, avg=10992.43, stdev=2049.14 00:09:23.489 clat (usec): min=101, max=378, avg=144.89, stdev=29.19 00:09:23.489 lat (usec): min=120, max=415, avg=155.89, stdev=30.04 00:09:23.489 clat percentiles (usec): 00:09:23.489 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 126], 00:09:23.489 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:09:23.489 | 70.00th=[ 145], 80.00th=[ 159], 90.00th=[ 186], 95.00th=[ 215], 00:09:23.489 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 273], 99.95th=[ 293], 00:09:23.489 | 99.99th=[ 379] 00:09:23.489 bw ( KiB/s): min=12288, max=12288, per=72.67%, avg=12288.00, stdev= 0.00, samples=1 00:09:23.489 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:23.489 lat (usec) 
: 250=97.54%, 500=2.42% 00:09:23.489 lat (msec) : 2=0.02%, 4=0.02% 00:09:23.489 cpu : usr=3.50%, sys=7.70%, ctx=5361, majf=0, minf=1 00:09:23.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.489 issued rwts: total=2560,2801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.489 job2: (groupid=0, jobs=1): err= 0: pid=1107425: Tue Oct 15 12:49:43 2024 00:09:23.489 read: IOPS=22, BW=90.3KiB/s (92.5kB/s)(92.0KiB/1019msec) 00:09:23.489 slat (nsec): min=9295, max=26054, avg=22208.39, stdev=3861.47 00:09:23.489 clat (usec): min=258, max=41149, avg=39194.38, stdev=8487.89 00:09:23.489 lat (usec): min=270, max=41173, avg=39216.58, stdev=8490.29 00:09:23.489 clat percentiles (usec): 00:09:23.489 | 1.00th=[ 260], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:23.489 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:23.489 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:23.489 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:23.489 | 99.99th=[41157] 00:09:23.489 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:09:23.489 slat (nsec): min=10634, max=48196, avg=12049.37, stdev=2673.50 00:09:23.489 clat (usec): min=120, max=968, avg=212.87, stdev=60.23 00:09:23.489 lat (usec): min=147, max=979, avg=224.92, stdev=60.27 00:09:23.489 clat percentiles (usec): 00:09:23.489 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 172], 00:09:23.489 | 30.00th=[ 184], 40.00th=[ 198], 50.00th=[ 212], 60.00th=[ 223], 00:09:23.489 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 281], 00:09:23.489 | 99.00th=[ 314], 99.50th=[ 627], 99.90th=[ 971], 99.95th=[ 971], 00:09:23.489 | 99.99th=[ 971] 
00:09:23.489 bw ( KiB/s): min= 4096, max= 4096, per=24.22%, avg=4096.00, stdev= 0.00, samples=1 00:09:23.489 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:23.489 lat (usec) : 250=84.67%, 500=10.47%, 750=0.56%, 1000=0.19% 00:09:23.489 lat (msec) : 50=4.11% 00:09:23.489 cpu : usr=0.10%, sys=1.28%, ctx=536, majf=0, minf=1 00:09:23.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.490 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.490 job3: (groupid=0, jobs=1): err= 0: pid=1107426: Tue Oct 15 12:49:43 2024 00:09:23.490 read: IOPS=22, BW=90.5KiB/s (92.6kB/s)(92.0KiB/1017msec) 00:09:23.490 slat (nsec): min=10396, max=27413, avg=23489.13, stdev=4201.65 00:09:23.490 clat (usec): min=399, max=42046, avg=39443.44, stdev=8521.54 00:09:23.490 lat (usec): min=410, max=42068, avg=39466.93, stdev=8524.28 00:09:23.490 clat percentiles (usec): 00:09:23.490 | 1.00th=[ 400], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:23.490 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:23.490 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:23.490 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:23.490 | 99.99th=[42206] 00:09:23.490 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:09:23.490 slat (nsec): min=11074, max=39269, avg=13323.13, stdev=2463.04 00:09:23.490 clat (usec): min=153, max=326, avg=196.24, stdev=28.04 00:09:23.490 lat (usec): min=165, max=342, avg=209.56, stdev=28.43 00:09:23.490 clat percentiles (usec): 00:09:23.490 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 174], 00:09:23.490 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 
188], 60.00th=[ 196], 00:09:23.490 | 70.00th=[ 208], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 249], 00:09:23.490 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 326], 99.95th=[ 326], 00:09:23.490 | 99.99th=[ 326] 00:09:23.490 bw ( KiB/s): min= 4096, max= 4096, per=24.22%, avg=4096.00, stdev= 0.00, samples=1 00:09:23.490 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:23.490 lat (usec) : 250=91.78%, 500=4.11% 00:09:23.490 lat (msec) : 50=4.11% 00:09:23.490 cpu : usr=0.30%, sys=1.18%, ctx=536, majf=0, minf=1 00:09:23.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.490 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.490 00:09:23.490 Run status group 0 (all jobs): 00:09:23.490 READ: bw=10.0MiB/s (10.5MB/s), 85.8KiB/s-9.99MiB/s (87.8kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1026msec 00:09:23.490 WRITE: bw=16.5MiB/s (17.3MB/s), 1996KiB/s-10.9MiB/s (2044kB/s-11.5MB/s), io=16.9MiB (17.8MB), run=1001-1026msec 00:09:23.490 00:09:23.490 Disk stats (read/write): 00:09:23.490 nvme0n1: ios=67/512, merge=0/0, ticks=712/111, in_queue=823, util=86.76% 00:09:23.490 nvme0n2: ios=2048/2527, merge=0/0, ticks=408/346, in_queue=754, util=86.88% 00:09:23.490 nvme0n3: ios=51/512, merge=0/0, ticks=1318/109, in_queue=1427, util=97.39% 00:09:23.490 nvme0n4: ios=77/512, merge=0/0, ticks=1196/95, in_queue=1291, util=98.32% 00:09:23.490 12:49:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:23.490 [global] 00:09:23.490 thread=1 00:09:23.490 invalidate=1 00:09:23.490 rw=write 00:09:23.490 time_based=1 00:09:23.490 runtime=1 00:09:23.490 
ioengine=libaio 00:09:23.490 direct=1 00:09:23.490 bs=4096 00:09:23.490 iodepth=128 00:09:23.490 norandommap=0 00:09:23.490 numjobs=1 00:09:23.490 00:09:23.490 verify_dump=1 00:09:23.490 verify_backlog=512 00:09:23.490 verify_state_save=0 00:09:23.490 do_verify=1 00:09:23.490 verify=crc32c-intel 00:09:23.490 [job0] 00:09:23.490 filename=/dev/nvme0n1 00:09:23.490 [job1] 00:09:23.490 filename=/dev/nvme0n2 00:09:23.490 [job2] 00:09:23.490 filename=/dev/nvme0n3 00:09:23.490 [job3] 00:09:23.490 filename=/dev/nvme0n4 00:09:23.490 Could not set queue depth (nvme0n1) 00:09:23.490 Could not set queue depth (nvme0n2) 00:09:23.490 Could not set queue depth (nvme0n3) 00:09:23.490 Could not set queue depth (nvme0n4) 00:09:23.749 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:23.749 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:23.749 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:23.749 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:23.749 fio-3.35 00:09:23.749 Starting 4 threads 00:09:25.137 00:09:25.137 job0: (groupid=0, jobs=1): err= 0: pid=1107799: Tue Oct 15 12:49:45 2024 00:09:25.137 read: IOPS=2277, BW=9109KiB/s (9328kB/s)(9200KiB/1010msec) 00:09:25.137 slat (nsec): min=1447, max=17586k, avg=184307.74, stdev=1189659.80 00:09:25.137 clat (usec): min=2889, max=62419, avg=20431.62, stdev=12362.31 00:09:25.137 lat (usec): min=5372, max=62428, avg=20615.93, stdev=12434.12 00:09:25.137 clat percentiles (usec): 00:09:25.137 | 1.00th=[ 6915], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10814], 00:09:25.137 | 30.00th=[12780], 40.00th=[16057], 50.00th=[17433], 60.00th=[17957], 00:09:25.137 | 70.00th=[20317], 80.00th=[23200], 90.00th=[41157], 95.00th=[51643], 00:09:25.137 | 99.00th=[59507], 99.50th=[60556], 99.90th=[62653], 
99.95th=[62653], 00:09:25.137 | 99.99th=[62653] 00:09:25.137 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:09:25.137 slat (usec): min=2, max=19538, avg=221.31, stdev=1047.05 00:09:25.137 clat (msec): min=4, max=110, avg=31.62, stdev=20.37 00:09:25.137 lat (msec): min=4, max=110, avg=31.84, stdev=20.50 00:09:25.137 clat percentiles (msec): 00:09:25.137 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 12], 20.00th=[ 17], 00:09:25.137 | 30.00th=[ 22], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 26], 00:09:25.137 | 70.00th=[ 35], 80.00th=[ 47], 90.00th=[ 53], 95.00th=[ 78], 00:09:25.137 | 99.00th=[ 103], 99.50th=[ 110], 99.90th=[ 110], 99.95th=[ 110], 00:09:25.137 | 99.99th=[ 110] 00:09:25.137 bw ( KiB/s): min=10128, max=10352, per=16.78%, avg=10240.00, stdev=158.39, samples=2 00:09:25.137 iops : min= 2532, max= 2588, avg=2560.00, stdev=39.60, samples=2 00:09:25.137 lat (msec) : 4=0.02%, 10=7.06%, 20=37.82%, 50=43.15%, 100=11.17% 00:09:25.137 lat (msec) : 250=0.78% 00:09:25.137 cpu : usr=1.78%, sys=3.27%, ctx=306, majf=0, minf=1 00:09:25.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:25.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.137 issued rwts: total=2300,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.137 job1: (groupid=0, jobs=1): err= 0: pid=1107800: Tue Oct 15 12:49:45 2024 00:09:25.137 read: IOPS=2546, BW=9.95MiB/s (10.4MB/s)(10.1MiB/1014msec) 00:09:25.137 slat (nsec): min=1794, max=9128.2k, avg=118896.80, stdev=725913.10 00:09:25.137 clat (usec): min=5986, max=44988, avg=12104.22, stdev=5716.02 00:09:25.137 lat (usec): min=5992, max=44998, avg=12223.12, stdev=5807.71 00:09:25.137 clat percentiles (usec): 00:09:25.137 | 1.00th=[ 6456], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9896], 00:09:25.137 | 30.00th=[ 
9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:09:25.137 | 70.00th=[10683], 80.00th=[11076], 90.00th=[18744], 95.00th=[25035], 00:09:25.137 | 99.00th=[39060], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:09:25.137 | 99.99th=[44827] 00:09:25.137 write: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec); 0 zone resets 00:09:25.137 slat (usec): min=2, max=50585, avg=221.57, stdev=1534.18 00:09:25.137 clat (msec): min=3, max=134, avg=26.66, stdev=22.01 00:09:25.137 lat (msec): min=3, max=134, avg=26.89, stdev=22.21 00:09:25.137 clat percentiles (msec): 00:09:25.137 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:09:25.137 | 30.00th=[ 9], 40.00th=[ 14], 50.00th=[ 24], 60.00th=[ 25], 00:09:25.137 | 70.00th=[ 26], 80.00th=[ 44], 90.00th=[ 67], 95.00th=[ 75], 00:09:25.137 | 99.00th=[ 82], 99.50th=[ 84], 99.90th=[ 113], 99.95th=[ 134], 00:09:25.137 | 99.99th=[ 134] 00:09:25.137 bw ( KiB/s): min=10688, max=13040, per=19.44%, avg=11864.00, stdev=1663.12, samples=2 00:09:25.137 iops : min= 2672, max= 3260, avg=2966.00, stdev=415.78, samples=2 00:09:25.137 lat (msec) : 4=0.42%, 10=35.83%, 20=30.19%, 50=24.30%, 100=9.13% 00:09:25.138 lat (msec) : 250=0.12% 00:09:25.138 cpu : usr=2.96%, sys=3.16%, ctx=305, majf=0, minf=1 00:09:25.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:25.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.138 issued rwts: total=2582,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.138 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.138 job2: (groupid=0, jobs=1): err= 0: pid=1107801: Tue Oct 15 12:49:45 2024 00:09:25.138 read: IOPS=6673, BW=26.1MiB/s (27.3MB/s)(26.3MiB/1009msec) 00:09:25.138 slat (nsec): min=1378, max=8957.5k, avg=79386.12, stdev=561692.15 00:09:25.138 clat (usec): min=3532, max=18309, avg=9887.70, stdev=2488.83 00:09:25.138 lat 
(usec): min=3538, max=18320, avg=9967.09, stdev=2523.03 00:09:25.138 clat percentiles (usec): 00:09:25.138 | 1.00th=[ 4178], 5.00th=[ 6915], 10.00th=[ 7570], 20.00th=[ 8586], 00:09:25.138 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:09:25.138 | 70.00th=[ 9634], 80.00th=[11863], 90.00th=[13960], 95.00th=[15401], 00:09:25.138 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:09:25.138 | 99.99th=[18220] 00:09:25.138 write: IOPS=7104, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1009msec); 0 zone resets 00:09:25.138 slat (usec): min=2, max=9222, avg=59.69, stdev=265.80 00:09:25.138 clat (usec): min=1195, max=19782, avg=8550.37, stdev=2167.96 00:09:25.138 lat (usec): min=1206, max=19794, avg=8610.06, stdev=2181.95 00:09:25.138 clat percentiles (usec): 00:09:25.138 | 1.00th=[ 2900], 5.00th=[ 4146], 10.00th=[ 5473], 20.00th=[ 7504], 00:09:25.138 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9241], 00:09:25.138 | 70.00th=[ 9372], 80.00th=[ 9372], 90.00th=[ 9503], 95.00th=[ 9503], 00:09:25.138 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19792], 99.95th=[19792], 00:09:25.138 | 99.99th=[19792] 00:09:25.138 bw ( KiB/s): min=28280, max=28672, per=46.66%, avg=28476.00, stdev=277.19, samples=2 00:09:25.138 iops : min= 7070, max= 7168, avg=7119.00, stdev=69.30, samples=2 00:09:25.138 lat (msec) : 2=0.16%, 4=2.43%, 10=82.66%, 20=14.75% 00:09:25.138 cpu : usr=4.66%, sys=7.64%, ctx=888, majf=0, minf=1 00:09:25.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:25.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.138 issued rwts: total=6734,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.138 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.138 job3: (groupid=0, jobs=1): err= 0: pid=1107802: Tue Oct 15 12:49:45 2024 00:09:25.138 read: IOPS=2537, BW=9.91MiB/s 
(10.4MB/s)(10.0MiB/1009msec) 00:09:25.138 slat (nsec): min=1361, max=15932k, avg=147311.31, stdev=1045935.24 00:09:25.138 clat (usec): min=8950, max=38535, avg=17190.26, stdev=5636.23 00:09:25.138 lat (usec): min=8957, max=38545, avg=17337.57, stdev=5730.14 00:09:25.138 clat percentiles (usec): 00:09:25.138 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[10814], 20.00th=[11731], 00:09:25.138 | 30.00th=[13173], 40.00th=[16450], 50.00th=[16909], 60.00th=[17433], 00:09:25.138 | 70.00th=[17957], 80.00th=[20579], 90.00th=[26084], 95.00th=[29230], 00:09:25.138 | 99.00th=[34341], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:09:25.138 | 99.99th=[38536] 00:09:25.138 write: IOPS=2648, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1009msec); 0 zone resets 00:09:25.138 slat (usec): min=3, max=15276, avg=228.72, stdev=1031.05 00:09:25.138 clat (usec): min=1576, max=110114, avg=31408.48, stdev=20341.01 00:09:25.138 lat (usec): min=1617, max=110127, avg=31637.20, stdev=20464.88 00:09:25.138 clat percentiles (msec): 00:09:25.138 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 15], 00:09:25.138 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 26], 00:09:25.138 | 70.00th=[ 35], 80.00th=[ 46], 90.00th=[ 53], 95.00th=[ 80], 00:09:25.138 | 99.00th=[ 103], 99.50th=[ 110], 99.90th=[ 111], 99.95th=[ 111], 00:09:25.138 | 99.99th=[ 111] 00:09:25.138 bw ( KiB/s): min= 9936, max=10544, per=16.78%, avg=10240.00, stdev=429.92, samples=2 00:09:25.138 iops : min= 2484, max= 2636, avg=2560.00, stdev=107.48, samples=2 00:09:25.138 lat (msec) : 2=0.02%, 4=0.13%, 10=6.31%, 20=45.07%, 50=39.91% 00:09:25.138 lat (msec) : 100=7.84%, 250=0.73% 00:09:25.138 cpu : usr=2.68%, sys=3.08%, ctx=307, majf=0, minf=2 00:09:25.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:25.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.138 issued rwts: 
total=2560,2672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.138 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.138 00:09:25.138 Run status group 0 (all jobs): 00:09:25.138 READ: bw=54.6MiB/s (57.3MB/s), 9109KiB/s-26.1MiB/s (9328kB/s-27.3MB/s), io=55.4MiB (58.1MB), run=1009-1014msec 00:09:25.138 WRITE: bw=59.6MiB/s (62.5MB/s), 9.90MiB/s-27.8MiB/s (10.4MB/s-29.1MB/s), io=60.4MiB (63.4MB), run=1009-1014msec 00:09:25.138 00:09:25.138 Disk stats (read/write): 00:09:25.138 nvme0n1: ios=2097/2135, merge=0/0, ticks=38638/63549, in_queue=102187, util=86.67% 00:09:25.138 nvme0n2: ios=2191/2560, merge=0/0, ticks=26632/62948, in_queue=89580, util=96.24% 00:09:25.138 nvme0n3: ios=5661/6135, merge=0/0, ticks=53750/50649, in_queue=104399, util=96.77% 00:09:25.138 nvme0n4: ios=2048/2207, merge=0/0, ticks=34929/68643, in_queue=103572, util=89.72% 00:09:25.138 12:49:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:25.138 [global] 00:09:25.138 thread=1 00:09:25.138 invalidate=1 00:09:25.138 rw=randwrite 00:09:25.138 time_based=1 00:09:25.138 runtime=1 00:09:25.138 ioengine=libaio 00:09:25.138 direct=1 00:09:25.138 bs=4096 00:09:25.138 iodepth=128 00:09:25.138 norandommap=0 00:09:25.138 numjobs=1 00:09:25.138 00:09:25.138 verify_dump=1 00:09:25.138 verify_backlog=512 00:09:25.138 verify_state_save=0 00:09:25.138 do_verify=1 00:09:25.138 verify=crc32c-intel 00:09:25.138 [job0] 00:09:25.138 filename=/dev/nvme0n1 00:09:25.138 [job1] 00:09:25.138 filename=/dev/nvme0n2 00:09:25.138 [job2] 00:09:25.138 filename=/dev/nvme0n3 00:09:25.138 [job3] 00:09:25.138 filename=/dev/nvme0n4 00:09:25.138 Could not set queue depth (nvme0n1) 00:09:25.138 Could not set queue depth (nvme0n2) 00:09:25.138 Could not set queue depth (nvme0n3) 00:09:25.138 Could not set queue depth (nvme0n4) 00:09:25.395 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:25.395 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:25.395 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:25.395 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:25.395 fio-3.35 00:09:25.395 Starting 4 threads 00:09:26.767 00:09:26.767 job0: (groupid=0, jobs=1): err= 0: pid=1108174: Tue Oct 15 12:49:46 2024 00:09:26.767 read: IOPS=3967, BW=15.5MiB/s (16.2MB/s)(15.6MiB/1007msec) 00:09:26.767 slat (nsec): min=1115, max=19961k, avg=126926.74, stdev=944771.46 00:09:26.767 clat (usec): min=3448, max=66978, avg=14574.87, stdev=8671.61 00:09:26.767 lat (usec): min=3453, max=66987, avg=14701.79, stdev=8770.47 00:09:26.767 clat percentiles (usec): 00:09:26.767 | 1.00th=[ 5276], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[ 9896], 00:09:26.767 | 30.00th=[10290], 40.00th=[10552], 50.00th=[11469], 60.00th=[12911], 00:09:26.767 | 70.00th=[14222], 80.00th=[15795], 90.00th=[20841], 95.00th=[38536], 00:09:26.767 | 99.00th=[45876], 99.50th=[54264], 99.90th=[66847], 99.95th=[66847], 00:09:26.767 | 99.99th=[66847] 00:09:26.767 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:09:26.767 slat (nsec): min=1824, max=12827k, avg=111972.52, stdev=563059.74 00:09:26.767 clat (usec): min=825, max=73327, avg=16948.53, stdev=14768.23 00:09:26.767 lat (usec): min=833, max=73337, avg=17060.50, stdev=14842.50 00:09:26.767 clat percentiles (usec): 00:09:26.767 | 1.00th=[ 1876], 5.00th=[ 3884], 10.00th=[ 5735], 20.00th=[ 8848], 00:09:26.767 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10683], 60.00th=[12780], 00:09:26.767 | 70.00th=[17433], 80.00th=[21103], 90.00th=[33817], 95.00th=[55313], 00:09:26.767 | 99.00th=[71828], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:09:26.767 | 99.99th=[72877] 
00:09:26.767 bw ( KiB/s): min=11272, max=21496, per=23.93%, avg=16384.00, stdev=7229.46, samples=2 00:09:26.767 iops : min= 2818, max= 5374, avg=4096.00, stdev=1807.36, samples=2 00:09:26.767 lat (usec) : 1000=0.17% 00:09:26.767 lat (msec) : 2=0.49%, 4=2.21%, 10=22.36%, 20=58.24%, 50=12.61% 00:09:26.767 lat (msec) : 100=3.92% 00:09:26.767 cpu : usr=2.49%, sys=2.98%, ctx=517, majf=0, minf=1 00:09:26.767 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:26.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.767 issued rwts: total=3995,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.767 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.767 job1: (groupid=0, jobs=1): err= 0: pid=1108175: Tue Oct 15 12:49:46 2024 00:09:26.767 read: IOPS=4499, BW=17.6MiB/s (18.4MB/s)(17.6MiB/1002msec) 00:09:26.767 slat (nsec): min=1194, max=23199k, avg=116556.19, stdev=935982.29 00:09:26.767 clat (usec): min=979, max=68704, avg=14989.81, stdev=9872.74 00:09:26.767 lat (usec): min=999, max=68713, avg=15106.36, stdev=9954.03 00:09:26.767 clat percentiles (usec): 00:09:26.767 | 1.00th=[ 5014], 5.00th=[ 7177], 10.00th=[ 7963], 20.00th=[ 8979], 00:09:26.767 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[11469], 60.00th=[11863], 00:09:26.767 | 70.00th=[14746], 80.00th=[20579], 90.00th=[26346], 95.00th=[33424], 00:09:26.767 | 99.00th=[58983], 99.50th=[62129], 99.90th=[68682], 99.95th=[68682], 00:09:26.767 | 99.99th=[68682] 00:09:26.767 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:09:26.767 slat (nsec): min=1834, max=11823k, avg=87614.22, stdev=470673.01 00:09:26.767 clat (usec): min=709, max=46215, avg=12808.65, stdev=6011.58 00:09:26.767 lat (usec): min=738, max=46223, avg=12896.26, stdev=6052.69 00:09:26.767 clat percentiles (usec): 00:09:26.767 | 1.00th=[ 4359], 5.00th=[ 6128], 10.00th=[ 7767], 
20.00th=[ 9110], 00:09:26.767 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10814], 60.00th=[11469], 00:09:26.767 | 70.00th=[12387], 80.00th=[17433], 90.00th=[21890], 95.00th=[22152], 00:09:26.767 | 99.00th=[34341], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:09:26.767 | 99.99th=[46400] 00:09:26.767 bw ( KiB/s): min=16384, max=16384, per=23.93%, avg=16384.00, stdev= 0.00, samples=1 00:09:26.767 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:26.767 lat (usec) : 750=0.02%, 1000=0.01% 00:09:26.767 lat (msec) : 2=0.15%, 4=0.61%, 10=29.86%, 20=51.45%, 50=16.95% 00:09:26.767 lat (msec) : 100=0.94% 00:09:26.767 cpu : usr=3.10%, sys=4.90%, ctx=486, majf=0, minf=1 00:09:26.767 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:26.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.767 issued rwts: total=4508,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.767 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.767 job2: (groupid=0, jobs=1): err= 0: pid=1108176: Tue Oct 15 12:49:46 2024 00:09:26.767 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:09:26.767 slat (nsec): min=1732, max=15073k, avg=126380.51, stdev=882693.94 00:09:26.768 clat (usec): min=6165, max=41033, avg=15541.63, stdev=5205.86 00:09:26.768 lat (usec): min=6184, max=41042, avg=15668.01, stdev=5261.76 00:09:26.768 clat percentiles (usec): 00:09:26.768 | 1.00th=[ 6980], 5.00th=[10028], 10.00th=[11863], 20.00th=[12256], 00:09:26.768 | 30.00th=[12649], 40.00th=[13435], 50.00th=[14746], 60.00th=[15664], 00:09:26.768 | 70.00th=[16057], 80.00th=[16909], 90.00th=[21627], 95.00th=[26870], 00:09:26.768 | 99.00th=[35914], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:09:26.768 | 99.99th=[41157] 00:09:26.768 write: IOPS=3472, BW=13.6MiB/s (14.2MB/s)(13.7MiB/1013msec); 0 zone resets 00:09:26.768 slat 
(usec): min=2, max=13414, avg=149.40, stdev=723.89 00:09:26.768 clat (usec): min=1367, max=106686, avg=22952.84, stdev=16558.64 00:09:26.768 lat (usec): min=1377, max=106697, avg=23102.24, stdev=16639.92 00:09:26.768 clat percentiles (msec): 00:09:26.768 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 12], 00:09:26.768 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 22], 60.00th=[ 23], 00:09:26.768 | 70.00th=[ 26], 80.00th=[ 30], 90.00th=[ 37], 95.00th=[ 55], 00:09:26.768 | 99.00th=[ 101], 99.50th=[ 104], 99.90th=[ 107], 99.95th=[ 107], 00:09:26.768 | 99.99th=[ 107] 00:09:26.768 bw ( KiB/s): min=12864, max=14256, per=19.80%, avg=13560.00, stdev=984.29, samples=2 00:09:26.768 iops : min= 3216, max= 3564, avg=3390.00, stdev=246.07, samples=2 00:09:26.768 lat (msec) : 2=0.17%, 4=1.79%, 10=9.76%, 20=54.73%, 50=30.47% 00:09:26.768 lat (msec) : 100=2.50%, 250=0.58% 00:09:26.768 cpu : usr=3.56%, sys=3.85%, ctx=417, majf=0, minf=2 00:09:26.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:26.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.768 issued rwts: total=3072,3518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.768 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.768 job3: (groupid=0, jobs=1): err= 0: pid=1108177: Tue Oct 15 12:49:46 2024 00:09:26.768 read: IOPS=4760, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1004msec) 00:09:26.768 slat (nsec): min=1229, max=20788k, avg=107034.43, stdev=828306.87 00:09:26.768 clat (usec): min=2246, max=42784, avg=13311.59, stdev=4543.07 00:09:26.768 lat (usec): min=2474, max=42811, avg=13418.62, stdev=4603.75 00:09:26.768 clat percentiles (usec): 00:09:26.768 | 1.00th=[ 3982], 5.00th=[ 7767], 10.00th=[ 9110], 20.00th=[10552], 00:09:26.768 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12649], 00:09:26.768 | 70.00th=[14222], 80.00th=[16909], 90.00th=[20317], 
95.00th=[22152], 00:09:26.768 | 99.00th=[29754], 99.50th=[29754], 99.90th=[29754], 99.95th=[34341], 00:09:26.768 | 99.99th=[42730] 00:09:26.768 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:09:26.768 slat (usec): min=2, max=10825, avg=88.88, stdev=524.85 00:09:26.768 clat (usec): min=1399, max=32419, avg=12430.96, stdev=5005.16 00:09:26.768 lat (usec): min=1409, max=32431, avg=12519.84, stdev=5044.64 00:09:26.768 clat percentiles (usec): 00:09:26.768 | 1.00th=[ 2802], 5.00th=[ 6128], 10.00th=[ 8291], 20.00th=[ 9241], 00:09:26.768 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11469], 60.00th=[11731], 00:09:26.768 | 70.00th=[12387], 80.00th=[15270], 90.00th=[19792], 95.00th=[23987], 00:09:26.768 | 99.00th=[28443], 99.50th=[29754], 99.90th=[31851], 99.95th=[32113], 00:09:26.768 | 99.99th=[32375] 00:09:26.768 bw ( KiB/s): min=20480, max=20480, per=29.91%, avg=20480.00, stdev= 0.00, samples=2 00:09:26.768 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:26.768 lat (msec) : 2=0.14%, 4=1.77%, 10=21.10%, 20=66.62%, 50=10.37% 00:09:26.768 cpu : usr=4.39%, sys=4.69%, ctx=513, majf=0, minf=1 00:09:26.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:26.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.768 issued rwts: total=4780,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.768 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.768 00:09:26.768 Run status group 0 (all jobs): 00:09:26.768 READ: bw=63.1MiB/s (66.1MB/s), 11.8MiB/s-18.6MiB/s (12.4MB/s-19.5MB/s), io=63.9MiB (67.0MB), run=1002-1013msec 00:09:26.768 WRITE: bw=66.9MiB/s (70.1MB/s), 13.6MiB/s-19.9MiB/s (14.2MB/s-20.9MB/s), io=67.7MiB (71.0MB), run=1002-1013msec 00:09:26.768 00:09:26.768 Disk stats (read/write): 00:09:26.768 nvme0n1: ios=3634/3799, merge=0/0, ticks=34116/32793, in_queue=66909, 
util=86.87% 00:09:26.768 nvme0n2: ios=3606/3631, merge=0/0, ticks=34214/28660, in_queue=62874, util=98.98% 00:09:26.768 nvme0n3: ios=2662/3072, merge=0/0, ticks=40617/65406, in_queue=106023, util=98.34% 00:09:26.768 nvme0n4: ios=4134/4104, merge=0/0, ticks=47160/47612, in_queue=94772, util=96.02% 00:09:26.768 12:49:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:26.768 12:49:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1108407 00:09:26.768 12:49:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:26.768 12:49:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:26.768 [global] 00:09:26.768 thread=1 00:09:26.768 invalidate=1 00:09:26.768 rw=read 00:09:26.768 time_based=1 00:09:26.768 runtime=10 00:09:26.768 ioengine=libaio 00:09:26.768 direct=1 00:09:26.768 bs=4096 00:09:26.768 iodepth=1 00:09:26.768 norandommap=1 00:09:26.768 numjobs=1 00:09:26.768 00:09:26.768 [job0] 00:09:26.768 filename=/dev/nvme0n1 00:09:26.768 [job1] 00:09:26.768 filename=/dev/nvme0n2 00:09:26.768 [job2] 00:09:26.768 filename=/dev/nvme0n3 00:09:26.768 [job3] 00:09:26.768 filename=/dev/nvme0n4 00:09:26.768 Could not set queue depth (nvme0n1) 00:09:26.768 Could not set queue depth (nvme0n2) 00:09:26.768 Could not set queue depth (nvme0n3) 00:09:26.768 Could not set queue depth (nvme0n4) 00:09:26.768 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.768 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.768 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.768 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.768 fio-3.35 
00:09:26.768 Starting 4 threads 00:09:30.044 12:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:30.044 12:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:30.044 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3022848, buflen=4096 00:09:30.044 fio: pid=1108595, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:30.044 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.044 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:30.045 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=42848256, buflen=4096 00:09:30.045 fio: pid=1108589, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:30.303 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.303 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:30.303 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=44695552, buflen=4096 00:09:30.303 fio: pid=1108563, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:30.303 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=692224, buflen=4096 00:09:30.303 fio: pid=1108569, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:30.303 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.303 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:30.303 00:09:30.303 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1108563: Tue Oct 15 12:49:50 2024 00:09:30.303 read: IOPS=3426, BW=13.4MiB/s (14.0MB/s)(42.6MiB/3185msec) 00:09:30.303 slat (usec): min=4, max=26884, avg= 9.64, stdev=257.29 00:09:30.303 clat (usec): min=169, max=41969, avg=279.29, stdev=1034.57 00:09:30.303 lat (usec): min=176, max=41994, avg=288.92, stdev=1066.79 00:09:30.303 clat percentiles (usec): 00:09:30.303 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:09:30.303 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 253], 00:09:30.303 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:09:30.303 | 99.00th=[ 412], 99.50th=[ 449], 99.90th=[ 502], 99.95th=[41157], 00:09:30.303 | 99.99th=[41157] 00:09:30.303 bw ( KiB/s): min= 8849, max=15512, per=53.96%, avg=14257.50, stdev=2655.90, samples=6 00:09:30.303 iops : min= 2212, max= 3878, avg=3564.33, stdev=664.08, samples=6 00:09:30.303 lat (usec) : 250=50.37%, 500=49.52%, 750=0.01%, 1000=0.01% 00:09:30.303 lat (msec) : 2=0.01%, 4=0.01%, 50=0.06% 00:09:30.303 cpu : usr=0.63%, sys=3.23%, ctx=10915, majf=0, minf=1 00:09:30.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.303 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.303 issued rwts: total=10913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.303 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=1108569: Tue Oct 15 12:49:50 2024 00:09:30.303 read: IOPS=50, BW=200KiB/s (205kB/s)(676KiB/3373msec) 00:09:30.303 slat (usec): min=7, max=10720, avg=77.45, stdev=821.18 00:09:30.303 clat (usec): min=194, max=41957, avg=19748.52, stdev=20294.63 00:09:30.303 lat (usec): min=202, max=51977, avg=19826.29, stdev=20381.51 00:09:30.303 clat percentiles (usec): 00:09:30.303 | 1.00th=[ 196], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 245], 00:09:30.303 | 30.00th=[ 255], 40.00th=[ 285], 50.00th=[ 465], 60.00th=[40633], 00:09:30.303 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:30.303 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:30.303 | 99.99th=[42206] 00:09:30.303 bw ( KiB/s): min= 96, max= 408, per=0.79%, avg=209.83, stdev=120.42, samples=6 00:09:30.303 iops : min= 24, max= 102, avg=52.33, stdev=30.18, samples=6 00:09:30.303 lat (usec) : 250=25.29%, 500=25.29% 00:09:30.303 lat (msec) : 4=0.59%, 10=0.59%, 50=47.65% 00:09:30.303 cpu : usr=0.03%, sys=0.09%, ctx=172, majf=0, minf=2 00:09:30.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.303 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.303 issued rwts: total=170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.303 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1108589: Tue Oct 15 12:49:50 2024 00:09:30.303 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(40.9MiB/2951msec) 00:09:30.303 slat (nsec): min=6915, max=41918, avg=8175.34, stdev=1523.85 00:09:30.303 clat (usec): min=189, max=40919, avg=269.82, stdev=795.52 00:09:30.303 lat (usec): min=197, max=40932, avg=278.00, stdev=795.74 00:09:30.303 clat percentiles (usec): 00:09:30.303 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 
239], 00:09:30.303 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:09:30.303 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:09:30.303 | 99.00th=[ 486], 99.50th=[ 498], 99.90th=[ 519], 99.95th=[ 1516], 00:09:30.303 | 99.99th=[41157] 00:09:30.303 bw ( KiB/s): min=14496, max=15432, per=57.36%, avg=15155.20, stdev=393.21, samples=5 00:09:30.303 iops : min= 3624, max= 3858, avg=3788.80, stdev=98.30, samples=5 00:09:30.303 lat (usec) : 250=53.43%, 500=46.12%, 750=0.36%, 1000=0.01% 00:09:30.303 lat (msec) : 2=0.03%, 50=0.04% 00:09:30.303 cpu : usr=2.17%, sys=5.46%, ctx=10463, majf=0, minf=2 00:09:30.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.303 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.303 issued rwts: total=10462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.303 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1108595: Tue Oct 15 12:49:50 2024 00:09:30.303 read: IOPS=270, BW=1079KiB/s (1105kB/s)(2952KiB/2736msec) 00:09:30.303 slat (nsec): min=7228, max=58917, avg=10017.22, stdev=4420.69 00:09:30.303 clat (usec): min=196, max=42025, avg=3665.88, stdev=11315.66 00:09:30.303 lat (usec): min=205, max=42048, avg=3675.90, stdev=11318.74 00:09:30.303 clat percentiles (usec): 00:09:30.303 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 227], 00:09:30.303 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 247], 00:09:30.303 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 293], 95.00th=[41157], 00:09:30.303 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:30.303 | 99.99th=[42206] 00:09:30.303 bw ( KiB/s): min= 96, max= 3504, per=4.43%, avg=1171.20, stdev=1552.80, samples=5 00:09:30.303 iops : min= 24, 
max= 876, avg=292.80, stdev=388.20, samples=5 00:09:30.303 lat (usec) : 250=67.93%, 500=23.55% 00:09:30.303 lat (msec) : 50=8.39% 00:09:30.303 cpu : usr=0.04%, sys=0.37%, ctx=740, majf=0, minf=2 00:09:30.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.303 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.303 issued rwts: total=739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.303 00:09:30.303 Run status group 0 (all jobs): 00:09:30.303 READ: bw=25.8MiB/s (27.1MB/s), 200KiB/s-13.8MiB/s (205kB/s-14.5MB/s), io=87.0MiB (91.3MB), run=2736-3373msec 00:09:30.303 00:09:30.303 Disk stats (read/write): 00:09:30.303 nvme0n1: ios=10910/0, merge=0/0, ticks=2937/0, in_queue=2937, util=94.85% 00:09:30.303 nvme0n2: ios=169/0, merge=0/0, ticks=3339/0, in_queue=3339, util=96.07% 00:09:30.303 nvme0n3: ios=10459/0, merge=0/0, ticks=2619/0, in_queue=2619, util=96.52% 00:09:30.303 nvme0n4: ios=735/0, merge=0/0, ticks=2578/0, in_queue=2578, util=96.44% 00:09:30.561 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.561 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:30.818 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.818 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:31.075 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:09:31.075 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:31.335 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.335 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:31.335 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:31.335 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1108407 00:09:31.335 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:31.335 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.592 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.592 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:31.592 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:31.592 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.592 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.592 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:31.592 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:31.592 12:49:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:31.592 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:31.592 nvmf hotplug test: fio failed as expected 00:09:31.593 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.850 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:31.850 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:31.850 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:31.850 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:31.850 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:31.850 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:31.850 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:31.850 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.850 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:31.850 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.850 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.850 rmmod nvme_tcp 00:09:31.850 rmmod nvme_fabrics 00:09:31.850 rmmod nvme_keyring 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # 
set -e 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1105695 ']' 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1105695 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1105695 ']' 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1105695 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1105695 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1105695' 00:09:31.850 killing process with pid 1105695 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1105695 00:09:31.850 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1105695 00:09:32.109 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:32.109 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:32.109 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:32.109 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:09:32.109 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:09:32.109 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:32.109 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:09:32.109 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:32.109 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:32.109 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.109 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.109 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.015 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:34.015 00:09:34.015 real 0m26.927s 00:09:34.015 user 1m46.771s 00:09:34.015 sys 0m8.548s 00:09:34.015 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.015 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.015 ************************************ 00:09:34.015 END TEST nvmf_fio_target 00:09:34.015 ************************************ 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.275 12:49:54 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.275 ************************************ 00:09:34.275 START TEST nvmf_bdevio 00:09:34.275 ************************************ 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:34.275 * Looking for test storage... 00:09:34.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@341 -- # ver2_l=1 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.275 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:34.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.275 --rc genhtml_branch_coverage=1 00:09:34.275 --rc genhtml_function_coverage=1 00:09:34.275 --rc genhtml_legend=1 00:09:34.275 --rc geninfo_all_blocks=1 00:09:34.276 --rc geninfo_unexecuted_blocks=1 00:09:34.276 00:09:34.276 ' 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:34.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.276 --rc genhtml_branch_coverage=1 00:09:34.276 --rc genhtml_function_coverage=1 00:09:34.276 --rc genhtml_legend=1 00:09:34.276 --rc geninfo_all_blocks=1 00:09:34.276 --rc geninfo_unexecuted_blocks=1 00:09:34.276 00:09:34.276 ' 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:34.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.276 --rc genhtml_branch_coverage=1 00:09:34.276 --rc genhtml_function_coverage=1 00:09:34.276 --rc genhtml_legend=1 00:09:34.276 --rc geninfo_all_blocks=1 00:09:34.276 --rc geninfo_unexecuted_blocks=1 00:09:34.276 00:09:34.276 ' 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:34.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.276 --rc genhtml_branch_coverage=1 00:09:34.276 --rc genhtml_function_coverage=1 00:09:34.276 --rc genhtml_legend=1 00:09:34.276 --rc geninfo_all_blocks=1 00:09:34.276 --rc geninfo_unexecuted_blocks=1 00:09:34.276 00:09:34.276 ' 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.276 12:49:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.276 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.536 12:49:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.536 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.536 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.536 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.536 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.536 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.536 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.537 12:49:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:34.537 
12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:34.537 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:41.112 12:50:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:41.112 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:41.112 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:41.112 
12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:41.112 Found net devices under 0000:86:00.0: cvl_0_0 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:41.112 Found net devices under 0000:86:00.1: cvl_0_1 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:41.112 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:41.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:09:41.112 00:09:41.112 --- 10.0.0.2 ping statistics --- 00:09:41.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.113 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:09:41.113 00:09:41.113 --- 10.0.0.1 ping statistics --- 00:09:41.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.113 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:41.113 12:50:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1113030 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1113030 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1113030 ']' 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.113 [2024-10-15 12:50:00.692560] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:09:41.113 [2024-10-15 12:50:00.692618] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.113 [2024-10-15 12:50:00.764444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.113 [2024-10-15 12:50:00.807077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.113 [2024-10-15 12:50:00.807114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.113 [2024-10-15 12:50:00.807121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.113 [2024-10-15 12:50:00.807126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.113 [2024-10-15 12:50:00.807131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:41.113 [2024-10-15 12:50:00.808761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:41.113 [2024-10-15 12:50:00.808871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:41.113 [2024-10-15 12:50:00.808979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.113 [2024-10-15 12:50:00.808979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.113 [2024-10-15 12:50:00.945358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.113 12:50:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.113 Malloc0 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.113 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.113 [2024-10-15 12:50:01.005731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:41.113 { 00:09:41.113 "params": { 00:09:41.113 "name": "Nvme$subsystem", 00:09:41.113 "trtype": "$TEST_TRANSPORT", 00:09:41.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.113 "adrfam": "ipv4", 00:09:41.113 "trsvcid": "$NVMF_PORT", 00:09:41.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.113 "hdgst": ${hdgst:-false}, 00:09:41.113 "ddgst": ${ddgst:-false} 00:09:41.113 }, 00:09:41.113 "method": "bdev_nvme_attach_controller" 00:09:41.113 } 00:09:41.113 EOF 00:09:41.113 )") 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:09:41.113 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:41.113 "params": { 00:09:41.113 "name": "Nvme1", 00:09:41.113 "trtype": "tcp", 00:09:41.113 "traddr": "10.0.0.2", 00:09:41.113 "adrfam": "ipv4", 00:09:41.113 "trsvcid": "4420", 00:09:41.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.113 "hdgst": false, 00:09:41.113 "ddgst": false 00:09:41.113 }, 00:09:41.113 "method": "bdev_nvme_attach_controller" 00:09:41.113 }' 00:09:41.113 [2024-10-15 12:50:01.055152] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:09:41.113 [2024-10-15 12:50:01.055190] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113053 ] 00:09:41.113 [2024-10-15 12:50:01.122955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:41.113 [2024-10-15 12:50:01.166487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.113 [2024-10-15 12:50:01.166593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.113 [2024-10-15 12:50:01.166593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.371 I/O targets: 00:09:41.371 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:41.371 00:09:41.371 00:09:41.371 CUnit - A unit testing framework for C - Version 2.1-3 00:09:41.371 http://cunit.sourceforge.net/ 00:09:41.371 00:09:41.371 00:09:41.371 Suite: bdevio tests on: Nvme1n1 00:09:41.371 Test: blockdev write read block ...passed 00:09:41.371 Test: blockdev write zeroes read block ...passed 00:09:41.371 Test: blockdev write zeroes read no split ...passed 00:09:41.371 Test: blockdev write zeroes read split 
...passed 00:09:41.371 Test: blockdev write zeroes read split partial ...passed 00:09:41.371 Test: blockdev reset ...[2024-10-15 12:50:01.558534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:41.371 [2024-10-15 12:50:01.558596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186c400 (9): Bad file descriptor 00:09:41.371 [2024-10-15 12:50:01.653706] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:41.371 passed 00:09:41.371 Test: blockdev write read 8 blocks ...passed 00:09:41.371 Test: blockdev write read size > 128k ...passed 00:09:41.371 Test: blockdev write read invalid size ...passed 00:09:41.628 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:41.628 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:41.628 Test: blockdev write read max offset ...passed 00:09:41.628 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:41.628 Test: blockdev writev readv 8 blocks ...passed 00:09:41.628 Test: blockdev writev readv 30 x 1block ...passed 00:09:41.628 Test: blockdev writev readv block ...passed 00:09:41.628 Test: blockdev writev readv size > 128k ...passed 00:09:41.628 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:41.628 Test: blockdev comparev and writev ...[2024-10-15 12:50:01.905502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:41.628 [2024-10-15 12:50:01.905530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:41.628 [2024-10-15 12:50:01.905544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:41.628 [2024-10-15 12:50:01.905552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:41.628 [2024-10-15 12:50:01.905800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:41.628 [2024-10-15 12:50:01.905812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:41.628 [2024-10-15 12:50:01.905823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:41.628 [2024-10-15 12:50:01.905830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:41.628 [2024-10-15 12:50:01.906051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:41.628 [2024-10-15 12:50:01.906062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:41.628 [2024-10-15 12:50:01.906073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:41.628 [2024-10-15 12:50:01.906080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:41.628 [2024-10-15 12:50:01.906303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:41.628 [2024-10-15 12:50:01.906315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:41.628 [2024-10-15 12:50:01.906327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:09:41.628 [2024-10-15 12:50:01.906335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:41.628 passed 00:09:41.887 Test: blockdev nvme passthru rw ...passed 00:09:41.887 Test: blockdev nvme passthru vendor specific ...[2024-10-15 12:50:01.989004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:41.887 [2024-10-15 12:50:01.989030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:41.887 [2024-10-15 12:50:01.989141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:41.887 [2024-10-15 12:50:01.989153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:41.887 [2024-10-15 12:50:01.989253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:41.887 [2024-10-15 12:50:01.989263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:41.887 [2024-10-15 12:50:01.989369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:41.887 [2024-10-15 12:50:01.989379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:41.887 passed 00:09:41.887 Test: blockdev nvme admin passthru ...passed 00:09:41.887 Test: blockdev copy ...passed 00:09:41.887 00:09:41.887 Run Summary: Type Total Ran Passed Failed Inactive 00:09:41.887 suites 1 1 n/a 0 0 00:09:41.887 tests 23 23 23 0 0 00:09:41.887 asserts 152 152 152 0 n/a 00:09:41.887 00:09:41.887 Elapsed time = 1.199 seconds 00:09:41.887 12:50:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.887 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.887 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.887 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.887 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:41.887 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:41.887 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:41.887 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:41.887 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.887 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:41.887 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.887 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:41.887 rmmod nvme_tcp 00:09:42.146 rmmod nvme_fabrics 00:09:42.146 rmmod nvme_keyring 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1113030 ']' 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1113030 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1113030 ']' 
00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1113030 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1113030 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1113030' 00:09:42.146 killing process with pid 1113030 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1113030 00:09:42.146 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1113030 00:09:42.405 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:42.405 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:42.405 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:42.405 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:42.405 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:09:42.405 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:42.405 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:09:42.405 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.405 
12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.405 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.405 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.405 12:50:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.311 12:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.311 00:09:44.311 real 0m10.159s 00:09:44.311 user 0m10.788s 00:09:44.311 sys 0m5.021s 00:09:44.311 12:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.311 12:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.311 ************************************ 00:09:44.311 END TEST nvmf_bdevio 00:09:44.311 ************************************ 00:09:44.311 12:50:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:44.311 00:09:44.311 real 4m36.274s 00:09:44.311 user 10m22.057s 00:09:44.311 sys 1m38.139s 00:09:44.311 12:50:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.311 12:50:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.311 ************************************ 00:09:44.311 END TEST nvmf_target_core 00:09:44.311 ************************************ 00:09:44.571 12:50:04 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:44.571 12:50:04 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:44.571 12:50:04 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.571 12:50:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.571 
************************************ 00:09:44.571 START TEST nvmf_target_extra 00:09:44.571 ************************************ 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:44.571 * Looking for test storage... 00:09:44.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:44.571 
12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:44.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.571 --rc genhtml_branch_coverage=1 00:09:44.571 --rc genhtml_function_coverage=1 00:09:44.571 --rc genhtml_legend=1 00:09:44.571 --rc geninfo_all_blocks=1 00:09:44.571 
--rc geninfo_unexecuted_blocks=1 00:09:44.571 00:09:44.571 ' 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:44.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.571 --rc genhtml_branch_coverage=1 00:09:44.571 --rc genhtml_function_coverage=1 00:09:44.571 --rc genhtml_legend=1 00:09:44.571 --rc geninfo_all_blocks=1 00:09:44.571 --rc geninfo_unexecuted_blocks=1 00:09:44.571 00:09:44.571 ' 00:09:44.571 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:44.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.572 --rc genhtml_branch_coverage=1 00:09:44.572 --rc genhtml_function_coverage=1 00:09:44.572 --rc genhtml_legend=1 00:09:44.572 --rc geninfo_all_blocks=1 00:09:44.572 --rc geninfo_unexecuted_blocks=1 00:09:44.572 00:09:44.572 ' 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:44.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.572 --rc genhtml_branch_coverage=1 00:09:44.572 --rc genhtml_function_coverage=1 00:09:44.572 --rc genhtml_legend=1 00:09:44.572 --rc geninfo_all_blocks=1 00:09:44.572 --rc geninfo_unexecuted_blocks=1 00:09:44.572 00:09:44.572 ' 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.572 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:44.832 ************************************ 00:09:44.832 START TEST nvmf_example 00:09:44.832 ************************************ 00:09:44.832 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:44.832 * Looking for test storage... 00:09:44.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.832 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:44.832 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:09:44.832 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.832 
12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.832 --rc genhtml_branch_coverage=1 00:09:44.832 --rc genhtml_function_coverage=1 00:09:44.832 --rc genhtml_legend=1 00:09:44.832 --rc geninfo_all_blocks=1 00:09:44.832 --rc geninfo_unexecuted_blocks=1 00:09:44.832 00:09:44.832 ' 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.832 --rc genhtml_branch_coverage=1 00:09:44.832 --rc genhtml_function_coverage=1 00:09:44.832 --rc genhtml_legend=1 00:09:44.832 --rc geninfo_all_blocks=1 00:09:44.832 --rc geninfo_unexecuted_blocks=1 00:09:44.832 00:09:44.832 ' 00:09:44.832 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.832 --rc genhtml_branch_coverage=1 00:09:44.833 --rc genhtml_function_coverage=1 00:09:44.833 --rc genhtml_legend=1 00:09:44.833 --rc geninfo_all_blocks=1 00:09:44.833 --rc geninfo_unexecuted_blocks=1 00:09:44.833 00:09:44.833 ' 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:44.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.833 --rc 
genhtml_branch_coverage=1 00:09:44.833 --rc genhtml_function_coverage=1 00:09:44.833 --rc genhtml_legend=1 00:09:44.833 --rc geninfo_all_blocks=1 00:09:44.833 --rc geninfo_unexecuted_blocks=1 00:09:44.833 00:09:44.833 ' 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:44.833 12:50:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.833 
12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:44.833 12:50:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.408 12:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:51.408 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:51.408 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:51.408 Found net devices under 0000:86:00.0: cvl_0_0 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:51.408 12:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:51.408 Found net devices under 0000:86:00.1: cvl_0_1 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:51.408 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.409 
12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.409 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:09:51.409 00:09:51.409 --- 10.0.0.2 ping statistics --- 00:09:51.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.409 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:09:51.409 00:09:51.409 --- 10.0.0.1 ping statistics --- 00:09:51.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.409 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:51.409 12:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1116875 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1116875 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1116875 ']' 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:51.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.409 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.975 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.975 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:09:51.975 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:51.975 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:51.975 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:51.976 
12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:51.976 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:04.177 Initializing NVMe Controllers 00:10:04.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:04.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:04.177 Initialization complete. Launching workers. 00:10:04.177 ======================================================== 00:10:04.177 Latency(us) 00:10:04.177 Device Information : IOPS MiB/s Average min max 00:10:04.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18514.71 72.32 3456.15 470.07 15419.77 00:10:04.177 ======================================================== 00:10:04.177 Total : 18514.71 72.32 3456.15 470.07 15419.77 00:10:04.177 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:04.177 rmmod nvme_tcp 00:10:04.177 rmmod nvme_fabrics 00:10:04.177 rmmod nvme_keyring 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1116875 ']' 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1116875 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1116875 ']' 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1116875 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1116875 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1116875' 00:10:04.177 killing process with pid 1116875 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1116875 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1116875 00:10:04.177 nvmf threads initialize successfully 00:10:04.177 bdev subsystem init successfully 00:10:04.177 created a nvmf target service 00:10:04.177 create targets's poll groups done 00:10:04.177 all subsystems of target started 00:10:04.177 nvmf target is running 00:10:04.177 all subsystems of target stopped 00:10:04.177 destroy targets's poll groups done 00:10:04.177 destroyed the nvmf target service 00:10:04.177 bdev subsystem 
finish successfully 00:10:04.177 nvmf threads destroy successfully 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.177 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.746 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.746 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:04.746 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.746 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.746 00:10:04.746 real 0m19.896s 00:10:04.746 user 0m46.247s 00:10:04.746 sys 0m6.131s 00:10:04.746 
12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.747 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.747 ************************************ 00:10:04.747 END TEST nvmf_example 00:10:04.747 ************************************ 00:10:04.747 12:50:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:04.747 12:50:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:04.747 12:50:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.747 12:50:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.747 ************************************ 00:10:04.747 START TEST nvmf_filesystem 00:10:04.747 ************************************ 00:10:04.747 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:04.747 * Looking for test storage... 
00:10:04.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.747 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.747 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:04.747 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:04.747 
12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:04.747 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:04.747 --rc genhtml_branch_coverage=1 00:10:04.747 --rc genhtml_function_coverage=1 00:10:04.747 --rc genhtml_legend=1 00:10:04.747 --rc geninfo_all_blocks=1 00:10:04.747 --rc geninfo_unexecuted_blocks=1 00:10:04.747 00:10:04.747 ' 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:04.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.747 --rc genhtml_branch_coverage=1 00:10:04.747 --rc genhtml_function_coverage=1 00:10:04.747 --rc genhtml_legend=1 00:10:04.747 --rc geninfo_all_blocks=1 00:10:04.747 --rc geninfo_unexecuted_blocks=1 00:10:04.747 00:10:04.747 ' 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:04.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.747 --rc genhtml_branch_coverage=1 00:10:04.747 --rc genhtml_function_coverage=1 00:10:04.747 --rc genhtml_legend=1 00:10:04.747 --rc geninfo_all_blocks=1 00:10:04.747 --rc geninfo_unexecuted_blocks=1 00:10:04.747 00:10:04.747 ' 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:04.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.747 --rc genhtml_branch_coverage=1 00:10:04.747 --rc genhtml_function_coverage=1 00:10:04.747 --rc genhtml_legend=1 00:10:04.747 --rc geninfo_all_blocks=1 00:10:04.747 --rc geninfo_unexecuted_blocks=1 00:10:04.747 00:10:04.747 ' 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:04.747 12:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:04.747 12:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:04.747 12:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:04.747 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:04.748 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:05.010 12:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 
00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:05.010 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # 
[[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:05.011 #define SPDK_CONFIG_H 00:10:05.011 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:05.011 #define SPDK_CONFIG_APPS 1 00:10:05.011 #define SPDK_CONFIG_ARCH native 00:10:05.011 #undef SPDK_CONFIG_ASAN 00:10:05.011 #undef SPDK_CONFIG_AVAHI 00:10:05.011 #undef SPDK_CONFIG_CET 00:10:05.011 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:05.011 #define SPDK_CONFIG_COVERAGE 1 00:10:05.011 #define SPDK_CONFIG_CROSS_PREFIX 00:10:05.011 #undef SPDK_CONFIG_CRYPTO 00:10:05.011 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:05.011 #undef SPDK_CONFIG_CUSTOMOCF 00:10:05.011 #undef SPDK_CONFIG_DAOS 00:10:05.011 #define SPDK_CONFIG_DAOS_DIR 00:10:05.011 #define SPDK_CONFIG_DEBUG 1 00:10:05.011 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:05.011 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:05.011 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:05.011 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:05.011 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:05.011 #undef SPDK_CONFIG_DPDK_UADK 00:10:05.011 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:05.011 #define SPDK_CONFIG_EXAMPLES 1 00:10:05.011 #undef SPDK_CONFIG_FC 00:10:05.011 #define SPDK_CONFIG_FC_PATH 00:10:05.011 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:05.011 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:05.011 #define SPDK_CONFIG_FSDEV 1 00:10:05.011 #undef SPDK_CONFIG_FUSE 00:10:05.011 #undef SPDK_CONFIG_FUZZER 00:10:05.011 #define SPDK_CONFIG_FUZZER_LIB 00:10:05.011 #undef SPDK_CONFIG_GOLANG 00:10:05.011 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:05.011 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:05.011 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:05.011 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:05.011 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:05.011 #undef 
SPDK_CONFIG_HAVE_LIBBSD 00:10:05.011 #undef SPDK_CONFIG_HAVE_LZ4 00:10:05.011 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:05.011 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:05.011 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:05.011 #define SPDK_CONFIG_IDXD 1 00:10:05.011 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:05.011 #undef SPDK_CONFIG_IPSEC_MB 00:10:05.011 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:05.011 #define SPDK_CONFIG_ISAL 1 00:10:05.011 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:05.011 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:05.011 #define SPDK_CONFIG_LIBDIR 00:10:05.011 #undef SPDK_CONFIG_LTO 00:10:05.011 #define SPDK_CONFIG_MAX_LCORES 128 00:10:05.011 #define SPDK_CONFIG_NVME_CUSE 1 00:10:05.011 #undef SPDK_CONFIG_OCF 00:10:05.011 #define SPDK_CONFIG_OCF_PATH 00:10:05.011 #define SPDK_CONFIG_OPENSSL_PATH 00:10:05.011 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:05.011 #define SPDK_CONFIG_PGO_DIR 00:10:05.011 #undef SPDK_CONFIG_PGO_USE 00:10:05.011 #define SPDK_CONFIG_PREFIX /usr/local 00:10:05.011 #undef SPDK_CONFIG_RAID5F 00:10:05.011 #undef SPDK_CONFIG_RBD 00:10:05.011 #define SPDK_CONFIG_RDMA 1 00:10:05.011 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:05.011 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:05.011 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:05.011 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:05.011 #define SPDK_CONFIG_SHARED 1 00:10:05.011 #undef SPDK_CONFIG_SMA 00:10:05.011 #define SPDK_CONFIG_TESTS 1 00:10:05.011 #undef SPDK_CONFIG_TSAN 00:10:05.011 #define SPDK_CONFIG_UBLK 1 00:10:05.011 #define SPDK_CONFIG_UBSAN 1 00:10:05.011 #undef SPDK_CONFIG_UNIT_TESTS 00:10:05.011 #undef SPDK_CONFIG_URING 00:10:05.011 #define SPDK_CONFIG_URING_PATH 00:10:05.011 #undef SPDK_CONFIG_URING_ZNS 00:10:05.011 #undef SPDK_CONFIG_USDT 00:10:05.011 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:05.011 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:05.011 #define SPDK_CONFIG_VFIO_USER 1 00:10:05.011 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:05.011 
#define SPDK_CONFIG_VHOST 1 00:10:05.011 #define SPDK_CONFIG_VIRTIO 1 00:10:05.011 #undef SPDK_CONFIG_VTUNE 00:10:05.011 #define SPDK_CONFIG_VTUNE_DIR 00:10:05.011 #define SPDK_CONFIG_WERROR 1 00:10:05.011 #define SPDK_CONFIG_WPDK_DIR 00:10:05.011 #undef SPDK_CONFIG_XNVME 00:10:05.011 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:05.011 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:05.012 12:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:05.012 
12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:05.012 12:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:05.012 
12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:05.012 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:05.013 12:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:05.013 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1119274 ]] 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1119274 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.vxxe8e 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vxxe8e/tests/target /tmp/spdk.vxxe8e 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=606707712 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4677722112 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=189180669952 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963949056 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6783279104 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97971941376 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981972480 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169748992 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192793088 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23044096 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981349888 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981976576 00:10:05.014 12:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=626688 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:10:05.014 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:05.015 * Looking for test storage... 
00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=189180669952 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8997871616 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.015 12:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:05.015 12:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.015 --rc genhtml_branch_coverage=1 00:10:05.015 --rc genhtml_function_coverage=1 00:10:05.015 --rc genhtml_legend=1 00:10:05.015 --rc geninfo_all_blocks=1 00:10:05.015 --rc geninfo_unexecuted_blocks=1 00:10:05.015 00:10:05.015 ' 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.015 --rc genhtml_branch_coverage=1 00:10:05.015 --rc genhtml_function_coverage=1 00:10:05.015 --rc genhtml_legend=1 00:10:05.015 --rc geninfo_all_blocks=1 00:10:05.015 --rc geninfo_unexecuted_blocks=1 00:10:05.015 00:10:05.015 ' 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.015 --rc genhtml_branch_coverage=1 00:10:05.015 --rc genhtml_function_coverage=1 00:10:05.015 --rc genhtml_legend=1 00:10:05.015 --rc geninfo_all_blocks=1 00:10:05.015 --rc geninfo_unexecuted_blocks=1 00:10:05.015 00:10:05.015 ' 00:10:05.015 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.015 --rc genhtml_branch_coverage=1 00:10:05.016 --rc genhtml_function_coverage=1 00:10:05.016 --rc genhtml_legend=1 00:10:05.016 --rc geninfo_all_blocks=1 00:10:05.016 --rc geninfo_unexecuted_blocks=1 00:10:05.016 00:10:05.016 ' 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.016 12:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:05.016 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.276 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:11.846 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.847 12:50:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:11.847 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:11.847 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.847 12:50:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:11.847 Found net devices under 0000:86:00.0: cvl_0_0 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:11.847 Found net devices under 0000:86:00.1: cvl_0_1 00:10:11.847 12:50:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:11.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:11.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:10:11.847 00:10:11.847 --- 10.0.0.2 ping statistics --- 00:10:11.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.847 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:10:11.847 00:10:11.847 --- 10.0.0.1 ping statistics --- 00:10:11.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.847 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:11.847 12:50:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.847 ************************************ 00:10:11.847 START TEST nvmf_filesystem_no_in_capsule 00:10:11.847 ************************************ 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:11.847 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:11.848 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:11.848 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:11.848 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.848 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1122528 00:10:11.848 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1122528 00:10:11.848 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:11.848 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 1122528 ']' 00:10:11.848 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.848 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.848 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.848 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.848 12:50:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.848 [2024-10-15 12:50:31.489584] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:10:11.848 [2024-10-15 12:50:31.489668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.848 [2024-10-15 12:50:31.564196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.848 [2024-10-15 12:50:31.607354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.848 [2024-10-15 12:50:31.607395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:11.848 [2024-10-15 12:50:31.607402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.848 [2024-10-15 12:50:31.607407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.848 [2024-10-15 12:50:31.607413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.848 [2024-10-15 12:50:31.608910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.848 [2024-10-15 12:50:31.608931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.848 [2024-10-15 12:50:31.609019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.848 [2024-10-15 12:50:31.609021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.107 [2024-10-15 12:50:32.363712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.107 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.367 Malloc1 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.367 [2024-10-15 12:50:32.509180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:12.367 12:50:32 
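The target-side setup traced above (filesystem.sh lines 52-56) boils down to five RPC calls. A hedged sketch follows — the transport options, bdev parameters, NQN, address, and port are taken verbatim from the log, but the `rpc.py` invocation style and the wrapping function are assumptions (the harness actually uses its own `rpc_cmd` helper); it is wrapped in a function so the file can be sourced without a running target:

```shell
# Sketch of the nvmf target setup shown in the trace above.
# Assumes an nvmf_tgt process is already running and SPDK's rpc.py is on PATH.
setup_nvmf_target() {
    # TCP transport with in-capsule data disabled (-c 0), matching in_capsule=0
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # 512 MiB malloc bdev with 512-byte blocks (num_blocks = 1048576)
    rpc.py bdev_malloc_create 512 512 -b Malloc1
    # Subsystem with the serial number the initiator later greps for
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    # Attach the bdev as a namespace and start listening
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
```

The initiator side then connects with `nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420`, as the trace shows a few lines later.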
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:12.367 { 00:10:12.367 "name": "Malloc1", 00:10:12.367 "aliases": [ 00:10:12.367 "5e0928f7-c0f9-4172-802d-53dcae5d2372" 00:10:12.367 ], 00:10:12.367 "product_name": "Malloc disk", 00:10:12.367 "block_size": 512, 00:10:12.367 "num_blocks": 1048576, 00:10:12.367 "uuid": "5e0928f7-c0f9-4172-802d-53dcae5d2372", 00:10:12.367 "assigned_rate_limits": { 00:10:12.367 "rw_ios_per_sec": 0, 00:10:12.367 "rw_mbytes_per_sec": 0, 00:10:12.367 "r_mbytes_per_sec": 0, 00:10:12.367 "w_mbytes_per_sec": 0 00:10:12.367 }, 00:10:12.367 "claimed": true, 00:10:12.367 "claim_type": "exclusive_write", 00:10:12.367 "zoned": false, 00:10:12.367 "supported_io_types": { 00:10:12.367 "read": true, 00:10:12.367 "write": true, 00:10:12.367 "unmap": true, 00:10:12.367 "flush": true, 00:10:12.367 "reset": true, 00:10:12.367 "nvme_admin": false, 00:10:12.367 "nvme_io": false, 00:10:12.367 "nvme_io_md": false, 00:10:12.367 "write_zeroes": true, 00:10:12.367 "zcopy": true, 00:10:12.367 "get_zone_info": false, 00:10:12.367 "zone_management": false, 00:10:12.367 "zone_append": false, 00:10:12.367 "compare": false, 00:10:12.367 "compare_and_write": 
false, 00:10:12.367 "abort": true, 00:10:12.367 "seek_hole": false, 00:10:12.367 "seek_data": false, 00:10:12.367 "copy": true, 00:10:12.367 "nvme_iov_md": false 00:10:12.367 }, 00:10:12.367 "memory_domains": [ 00:10:12.367 { 00:10:12.367 "dma_device_id": "system", 00:10:12.367 "dma_device_type": 1 00:10:12.367 }, 00:10:12.367 { 00:10:12.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.367 "dma_device_type": 2 00:10:12.367 } 00:10:12.367 ], 00:10:12.367 "driver_specific": {} 00:10:12.367 } 00:10:12.367 ]' 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:12.367 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.813 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
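The `get_bdev_size` helper traced above derives the malloc bdev's size from the `bdev_get_bdevs` JSON (`block_size` × `num_blocks`, then echoed in MiB). The arithmetic, using the exact values from the log:

```shell
# Size computation from the bdev_get_bdevs output in the trace:
# block_size=512, num_blocks=1048576
bs=512
nb=1048576
echo $(( bs * nb ))        # 536870912 bytes, i.e. the malloc_size the test records
echo $(( bs * nb / 1024 / 1024 ))   # 512 (MiB), the value get_bdev_size echoes
```

This is why the test later asserts `nvme_size == malloc_size` (both 536870912) before partitioning the device.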
# waitforserial SPDKISFASTANDAWESOME 00:10:13.813 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:13.813 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.813 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:13.813 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:15.716 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:15.716 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:15.716 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.716 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:15.716 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.716 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:15.716 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:15.716 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:15.716 12:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:15.717 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:15.717 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:15.717 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:15.717 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:15.717 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:15.717 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:15.717 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:15.717 12:50:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:15.975 12:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:16.543 12:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:17.920 12:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.920 ************************************ 00:10:17.920 START TEST filesystem_ext4 00:10:17.920 ************************************ 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:17.920 12:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:17.920 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:17.920 mke2fs 1.47.0 (5-Feb-2023) 00:10:17.920 Discarding device blocks: 0/522240 done 00:10:17.920 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:17.920 Filesystem UUID: 117db9a1-bfc5-43cb-a301-7e2675d3d98d 00:10:17.920 Superblock backups stored on blocks: 00:10:17.920 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:17.920 00:10:17.920 Allocating group tables: 0/64 done 00:10:17.920 Writing inode tables: 0/64 done 00:10:18.179 Creating journal (8192 blocks): done 00:10:20.334 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:10:20.334 00:10:20.334 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:20.334 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:25.608 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:25.608 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:25.608 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:25.608 12:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:25.608 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:25.608 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:25.867 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1122528 00:10:25.867 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:25.867 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:25.867 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:25.867 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:25.867 00:10:25.867 real 0m8.058s 00:10:25.867 user 0m0.030s 00:10:25.867 sys 0m0.072s 00:10:25.867 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.867 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:25.867 ************************************ 00:10:25.867 END TEST filesystem_ext4 00:10:25.867 ************************************ 00:10:25.867 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:25.867 
12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:25.867 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.867 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.867 ************************************ 00:10:25.867 START TEST filesystem_btrfs 00:10:25.867 ************************************ 00:10:25.867 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:25.867 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:25.867 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:25.867 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:25.867 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:25.867 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:25.867 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:25.867 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:25.867 12:50:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:25.867 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:25.867 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:25.867 btrfs-progs v6.8.1 00:10:25.867 See https://btrfs.readthedocs.io for more information. 00:10:25.867 00:10:25.867 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:25.867 NOTE: several default settings have changed in version 5.15, please make sure 00:10:25.867 this does not affect your deployments: 00:10:25.867 - DUP for metadata (-m dup) 00:10:25.867 - enabled no-holes (-O no-holes) 00:10:25.867 - enabled free-space-tree (-R free-space-tree) 00:10:25.867 00:10:25.867 Label: (null) 00:10:25.867 UUID: a9d1c54d-3bc7-4b0e-b77c-05326c995a73 00:10:25.867 Node size: 16384 00:10:25.867 Sector size: 4096 (CPU page size: 4096) 00:10:25.867 Filesystem size: 510.00MiB 00:10:25.867 Block group profiles: 00:10:25.867 Data: single 8.00MiB 00:10:25.867 Metadata: DUP 32.00MiB 00:10:25.867 System: DUP 8.00MiB 00:10:25.867 SSD detected: yes 00:10:25.867 Zoned device: no 00:10:25.867 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:25.867 Checksum: crc32c 00:10:25.867 Number of devices: 1 00:10:25.867 Devices: 00:10:25.867 ID SIZE PATH 00:10:25.867 1 510.00MiB /dev/nvme0n1p1 00:10:25.867 00:10:25.867 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:25.867 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:26.126 12:50:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:26.126 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:26.126 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:26.126 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:26.126 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:26.126 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:26.126 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1122528 00:10:26.126 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:26.126 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:26.126 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:26.126 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:26.126 00:10:26.126 real 0m0.416s 00:10:26.126 user 0m0.032s 00:10:26.126 sys 0m0.108s 00:10:26.126 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.126 
12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:26.126 ************************************ 00:10:26.126 END TEST filesystem_btrfs 00:10:26.126 ************************************ 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.386 ************************************ 00:10:26.386 START TEST filesystem_xfs 00:10:26.386 ************************************ 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:26.386 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:26.386 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:26.386 = sectsz=512 attr=2, projid32bit=1 00:10:26.386 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:26.386 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:26.386 data = bsize=4096 blocks=130560, imaxpct=25 00:10:26.386 = sunit=0 swidth=0 blks 00:10:26.386 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:26.386 log =internal log bsize=4096 blocks=16384, version=2 00:10:26.386 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:26.386 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:27.323 Discarding blocks...Done. 
00:10:27.323 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:27.323 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1122528 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:29.346 12:50:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:29.346 00:10:29.346 real 0m3.015s 00:10:29.346 user 0m0.035s 00:10:29.346 sys 0m0.063s 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:29.346 ************************************ 00:10:29.346 END TEST filesystem_xfs 00:10:29.346 ************************************ 00:10:29.346 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:29.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
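Each of the three filesystem subtests above (ext4, btrfs, xfs) runs the same create/exercise cycle from filesystem.sh lines 21-30. A hedged reconstruction — the steps and paths come from the trace, while the function wrapper and `$fstype` plumbing are illustrative (the real `make_filesystem` helper also retries and picks `-F` vs `-f` per fstype):

```shell
# Per-filesystem exercise repeated for ext4, btrfs, and xfs in the trace.
# Assumes /dev/nvme0n1p1 exists (created earlier with parted) and the caller
# passes a filesystem type; wrapped in a function so sourcing is side-effect free.
exercise_filesystem() {
    local fstype="$1"            # ext4 | btrfs | xfs
    local dev=/dev/nvme0n1p1
    # ext4 uses -F to force, btrfs/xfs use -f
    local force=-f
    [ "$fstype" = ext4 ] && force=-F

    mkfs."$fstype" "$force" "$dev"
    mount "$dev" /mnt/device
    touch /mnt/device/aaa        # write a file over NVMe/TCP
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
}
```

After all three pass, the harness deletes the partition (`parted -s /dev/nvme0n1 rm 1`), disconnects the initiator, and tears down the subsystem, which is what the disconnect trace below shows.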
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1122528 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1122528 ']' 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1122528 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.648 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1122528 00:10:29.907 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.907 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.907 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1122528' 00:10:29.907 killing process with pid 1122528 00:10:29.907 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1122528 00:10:29.907 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1122528 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:30.166 00:10:30.166 real 0m18.900s 00:10:30.166 user 1m14.616s 00:10:30.166 sys 0m1.424s 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.166 ************************************ 00:10:30.166 END TEST nvmf_filesystem_no_in_capsule 00:10:30.166 ************************************ 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.166 12:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.166 ************************************ 00:10:30.166 START TEST nvmf_filesystem_in_capsule 00:10:30.166 ************************************ 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1125765 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1125765 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1125765 ']' 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.166 12:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.166 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.166 [2024-10-15 12:50:50.463457] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:10:30.166 [2024-10-15 12:50:50.463500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.425 [2024-10-15 12:50:50.532896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.425 [2024-10-15 12:50:50.575178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.425 [2024-10-15 12:50:50.575214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.425 [2024-10-15 12:50:50.575221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.425 [2024-10-15 12:50:50.575227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.425 [2024-10-15 12:50:50.575236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:30.425 [2024-10-15 12:50:50.576811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.425 [2024-10-15 12:50:50.576920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.425 [2024-10-15 12:50:50.577027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.425 [2024-10-15 12:50:50.577028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.425 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.425 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:30.425 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:30.425 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:30.425 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.425 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.425 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:30.425 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:30.425 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.425 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.425 [2024-10-15 12:50:50.709035] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.426 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.426 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:30.426 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.426 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.685 Malloc1 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.685 12:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.685 [2024-10-15 12:50:50.846320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:30.685 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.686 12:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.686 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.686 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:30.686 { 00:10:30.686 "name": "Malloc1", 00:10:30.686 "aliases": [ 00:10:30.686 "c92fb3a9-bda9-44df-86b6-2acfe29916bc" 00:10:30.686 ], 00:10:30.686 "product_name": "Malloc disk", 00:10:30.686 "block_size": 512, 00:10:30.686 "num_blocks": 1048576, 00:10:30.686 "uuid": "c92fb3a9-bda9-44df-86b6-2acfe29916bc", 00:10:30.686 "assigned_rate_limits": { 00:10:30.686 "rw_ios_per_sec": 0, 00:10:30.686 "rw_mbytes_per_sec": 0, 00:10:30.686 "r_mbytes_per_sec": 0, 00:10:30.686 "w_mbytes_per_sec": 0 00:10:30.686 }, 00:10:30.686 "claimed": true, 00:10:30.686 "claim_type": "exclusive_write", 00:10:30.686 "zoned": false, 00:10:30.686 "supported_io_types": { 00:10:30.686 "read": true, 00:10:30.686 "write": true, 00:10:30.686 "unmap": true, 00:10:30.686 "flush": true, 00:10:30.686 "reset": true, 00:10:30.686 "nvme_admin": false, 00:10:30.686 "nvme_io": false, 00:10:30.686 "nvme_io_md": false, 00:10:30.686 "write_zeroes": true, 00:10:30.686 "zcopy": true, 00:10:30.686 "get_zone_info": false, 00:10:30.686 "zone_management": false, 00:10:30.686 "zone_append": false, 00:10:30.686 "compare": false, 00:10:30.686 "compare_and_write": false, 00:10:30.686 "abort": true, 00:10:30.686 "seek_hole": false, 00:10:30.686 "seek_data": false, 00:10:30.686 "copy": true, 00:10:30.686 "nvme_iov_md": false 00:10:30.686 }, 00:10:30.686 "memory_domains": [ 00:10:30.686 { 00:10:30.686 "dma_device_id": "system", 00:10:30.686 "dma_device_type": 1 00:10:30.686 }, 00:10:30.686 { 00:10:30.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.686 "dma_device_type": 2 00:10:30.686 } 00:10:30.686 ], 00:10:30.686 
"driver_specific": {} 00:10:30.686 } 00:10:30.686 ]' 00:10:30.686 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:30.686 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:30.686 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:30.686 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:30.686 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:30.686 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:30.686 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:30.686 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:32.064 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:32.064 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:32.064 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:32.064 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:10:32.064 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:33.970 12:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:33.970 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:34.229 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:34.229 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.605 ************************************ 00:10:35.605 START TEST filesystem_in_capsule_ext4 00:10:35.605 ************************************ 00:10:35.605 12:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:35.605 mke2fs 1.47.0 (5-Feb-2023) 00:10:35.605 Discarding device blocks: 
0/522240 done 00:10:35.605 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:35.605 Filesystem UUID: ce3d00a8-ac7f-47b8-b92a-c32d41d9c14b 00:10:35.605 Superblock backups stored on blocks: 00:10:35.605 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:35.605 00:10:35.605 Allocating group tables: 0/64 done 00:10:35.605 Writing inode tables: 0/64 done 00:10:35.605 Creating journal (8192 blocks): done 00:10:35.605 Writing superblocks and filesystem accounting information: 0/64 done 00:10:35.605 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:35.605 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1125765 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:40.875 00:10:40.875 real 0m5.613s 00:10:40.875 user 0m0.024s 00:10:40.875 sys 0m0.075s 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.875 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:40.875 ************************************ 00:10:40.875 END TEST filesystem_in_capsule_ext4 00:10:40.875 ************************************ 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.135 ************************************ 00:10:41.135 START 
TEST filesystem_in_capsule_btrfs 00:10:41.135 ************************************ 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:41.135 btrfs-progs v6.8.1 00:10:41.135 See https://btrfs.readthedocs.io for more information. 00:10:41.135 00:10:41.135 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:41.135 NOTE: several default settings have changed in version 5.15, please make sure 00:10:41.135 this does not affect your deployments: 00:10:41.135 - DUP for metadata (-m dup) 00:10:41.135 - enabled no-holes (-O no-holes) 00:10:41.135 - enabled free-space-tree (-R free-space-tree) 00:10:41.135 00:10:41.135 Label: (null) 00:10:41.135 UUID: 4d1f7142-3ed4-4a7b-9a77-21500e86dee6 00:10:41.135 Node size: 16384 00:10:41.135 Sector size: 4096 (CPU page size: 4096) 00:10:41.135 Filesystem size: 510.00MiB 00:10:41.135 Block group profiles: 00:10:41.135 Data: single 8.00MiB 00:10:41.135 Metadata: DUP 32.00MiB 00:10:41.135 System: DUP 8.00MiB 00:10:41.135 SSD detected: yes 00:10:41.135 Zoned device: no 00:10:41.135 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:41.135 Checksum: crc32c 00:10:41.135 Number of devices: 1 00:10:41.135 Devices: 00:10:41.135 ID SIZE PATH 00:10:41.135 1 510.00MiB /dev/nvme0n1p1 00:10:41.135 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:41.135 12:51:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1125765 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.072 00:10:42.072 real 0m1.067s 00:10:42.072 user 0m0.030s 00:10:42.072 sys 0m0.112s 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:42.072 ************************************ 00:10:42.072 END TEST filesystem_in_capsule_btrfs 00:10:42.072 ************************************ 00:10:42.072 12:51:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.072 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.073 ************************************ 00:10:42.073 START TEST filesystem_in_capsule_xfs 00:10:42.073 ************************************ 00:10:42.073 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:42.073 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:42.073 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.073 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:42.073 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:42.073 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:42.073 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:42.073 
12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:10:42.073 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:42.073 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:42.073 12:51:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:42.332 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:42.332 = sectsz=512 attr=2, projid32bit=1 00:10:42.332 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:42.332 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:42.332 data = bsize=4096 blocks=130560, imaxpct=25 00:10:42.332 = sunit=0 swidth=0 blks 00:10:42.332 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:42.332 log =internal log bsize=4096 blocks=16384, version=2 00:10:42.332 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:42.332 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:43.268 Discarding blocks...Done. 
00:10:43.268 12:51:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:43.268 12:51:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1125765 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:45.804 00:10:45.804 real 0m3.267s 00:10:45.804 user 0m0.028s 00:10:45.804 sys 0m0.071s 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:45.804 ************************************ 00:10:45.804 END TEST filesystem_in_capsule_xfs 00:10:45.804 ************************************ 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.804 12:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1125765 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1125765 ']' 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1125765 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.804 12:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1125765 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1125765' 00:10:45.804 killing process with pid 1125765 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1125765 00:10:45.804 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1125765 00:10:46.063 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:46.063 00:10:46.063 real 0m15.848s 00:10:46.063 user 1m2.323s 00:10:46.063 sys 0m1.365s 00:10:46.063 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.063 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.063 ************************************ 00:10:46.063 END TEST nvmf_filesystem_in_capsule 00:10:46.063 ************************************ 00:10:46.063 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:46.063 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:46.063 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:46.063 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.063 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:46.063 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.063 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.064 rmmod nvme_tcp 00:10:46.064 rmmod nvme_fabrics 00:10:46.064 rmmod nvme_keyring 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.064 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:48.599 00:10:48.599 real 0m43.565s 00:10:48.599 user 2m19.016s 00:10:48.599 sys 0m7.529s 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.599 ************************************ 00:10:48.599 END TEST nvmf_filesystem 00:10:48.599 ************************************ 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:48.599 ************************************ 00:10:48.599 START TEST nvmf_target_discovery 00:10:48.599 ************************************ 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:48.599 * Looking for test storage... 
00:10:48.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:48.599 
12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:48.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.599 --rc genhtml_branch_coverage=1 00:10:48.599 --rc genhtml_function_coverage=1 00:10:48.599 --rc genhtml_legend=1 00:10:48.599 --rc geninfo_all_blocks=1 00:10:48.599 --rc geninfo_unexecuted_blocks=1 00:10:48.599 00:10:48.599 ' 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:48.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.599 --rc genhtml_branch_coverage=1 00:10:48.599 --rc genhtml_function_coverage=1 00:10:48.599 --rc genhtml_legend=1 00:10:48.599 --rc geninfo_all_blocks=1 00:10:48.599 --rc geninfo_unexecuted_blocks=1 00:10:48.599 00:10:48.599 ' 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:48.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.599 --rc genhtml_branch_coverage=1 00:10:48.599 --rc genhtml_function_coverage=1 00:10:48.599 --rc genhtml_legend=1 00:10:48.599 --rc geninfo_all_blocks=1 00:10:48.599 --rc geninfo_unexecuted_blocks=1 00:10:48.599 00:10:48.599 ' 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:48.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.599 --rc genhtml_branch_coverage=1 00:10:48.599 --rc genhtml_function_coverage=1 00:10:48.599 --rc genhtml_legend=1 00:10:48.599 --rc geninfo_all_blocks=1 00:10:48.599 --rc geninfo_unexecuted_blocks=1 00:10:48.599 00:10:48.599 ' 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.599 12:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.599 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:48.600 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.173 12:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.173 12:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.173 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:55.174 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:55.174 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.174 12:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:55.174 Found net devices under 0000:86:00.0: cvl_0_0 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:55.174 12:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:55.174 Found net devices under 0000:86:00.1: cvl_0_1 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:10:55.174 00:10:55.174 --- 10.0.0.2 ping statistics --- 00:10:55.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.174 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:55.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:10:55.174 00:10:55.174 --- 10.0.0.1 ping statistics --- 00:10:55.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.174 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1132609 00:10:55.174 12:51:14 
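The trace above (nvmf/common.sh@250-291) shows the test harness moving one port of the NIC into a private network namespace, addressing both ends, opening the NVMe/TCP port in iptables, and ping-verifying the link before starting `nvmf_tgt`. As a hedged sketch of that sequence (interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are taken from this log; the commands are printed rather than executed, since the real sequence needs root and the physical ice interfaces):

```shell
# Print the namespace-plumbing steps nvmf/common.sh performs above.
# Printed, not run: the real commands require root and real NICs.
NS=cvl_0_0_ns_spdk
{
  echo "ip netns add $NS"
  echo "ip link set cvl_0_0 netns $NS"                       # target-side port into the netns
  echo "ip addr add 10.0.0.1/24 dev cvl_0_1"                 # initiator IP (host side)
  echo "ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0"  # target IP (netns side)
  echo "ip link set cvl_0_1 up"
  echo "ip netns exec $NS ip link set cvl_0_0 up"
  echo "ip netns exec $NS ip link set lo up"
  echo "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"  # open NVMe/TCP port
} > netns_cmds.txt
cat netns_cmds.txt
```

The two pings in the log (host to 10.0.0.2, then `ip netns exec` back to 10.0.0.1) confirm this plumbing before `return 0` lets the test proceed.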
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1132609 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1132609 ']' 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.174 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.175 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.175 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 [2024-10-15 12:51:14.791487] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:10:55.175 [2024-10-15 12:51:14.791539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.175 [2024-10-15 12:51:14.863398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.175 [2024-10-15 12:51:14.906569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:55.175 [2024-10-15 12:51:14.906613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.175 [2024-10-15 12:51:14.906620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.175 [2024-10-15 12:51:14.906626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.175 [2024-10-15 12:51:14.906631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.175 [2024-10-15 12:51:14.908260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.175 [2024-10-15 12:51:14.908367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.175 [2024-10-15 12:51:14.908461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.175 [2024-10-15 12:51:14.908461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.175 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.175 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 [2024-10-15 12:51:15.045771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 Null1 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 
12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 [2024-10-15 12:51:15.091174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 Null2 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 
12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 Null3 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 Null4 00:10:55.175 
12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.175 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:55.175 00:10:55.175 Discovery Log Number of Records 6, Generation counter 6 00:10:55.175 =====Discovery Log Entry 0====== 00:10:55.176 trtype: tcp 00:10:55.176 adrfam: ipv4 00:10:55.176 subtype: current discovery subsystem 00:10:55.176 treq: not required 00:10:55.176 portid: 0 00:10:55.176 trsvcid: 4420 00:10:55.176 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:55.176 traddr: 10.0.0.2 00:10:55.176 eflags: explicit discovery connections, duplicate discovery information 00:10:55.176 sectype: none 00:10:55.176 =====Discovery Log Entry 1====== 00:10:55.176 trtype: tcp 00:10:55.176 adrfam: ipv4 00:10:55.176 subtype: nvme subsystem 00:10:55.176 treq: not required 00:10:55.176 portid: 0 00:10:55.176 trsvcid: 4420 00:10:55.176 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:55.176 traddr: 10.0.0.2 00:10:55.176 eflags: none 00:10:55.176 sectype: none 00:10:55.176 =====Discovery Log Entry 2====== 00:10:55.176 
trtype: tcp 00:10:55.176 adrfam: ipv4 00:10:55.176 subtype: nvme subsystem 00:10:55.176 treq: not required 00:10:55.176 portid: 0 00:10:55.176 trsvcid: 4420 00:10:55.176 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:55.176 traddr: 10.0.0.2 00:10:55.176 eflags: none 00:10:55.176 sectype: none 00:10:55.176 =====Discovery Log Entry 3====== 00:10:55.176 trtype: tcp 00:10:55.176 adrfam: ipv4 00:10:55.176 subtype: nvme subsystem 00:10:55.176 treq: not required 00:10:55.176 portid: 0 00:10:55.176 trsvcid: 4420 00:10:55.176 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:55.176 traddr: 10.0.0.2 00:10:55.176 eflags: none 00:10:55.176 sectype: none 00:10:55.176 =====Discovery Log Entry 4====== 00:10:55.176 trtype: tcp 00:10:55.176 adrfam: ipv4 00:10:55.176 subtype: nvme subsystem 00:10:55.176 treq: not required 00:10:55.176 portid: 0 00:10:55.176 trsvcid: 4420 00:10:55.176 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:55.176 traddr: 10.0.0.2 00:10:55.176 eflags: none 00:10:55.176 sectype: none 00:10:55.176 =====Discovery Log Entry 5====== 00:10:55.176 trtype: tcp 00:10:55.176 adrfam: ipv4 00:10:55.176 subtype: discovery subsystem referral 00:10:55.176 treq: not required 00:10:55.176 portid: 0 00:10:55.176 trsvcid: 4430 00:10:55.176 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:55.176 traddr: 10.0.0.2 00:10:55.176 eflags: none 00:10:55.176 sectype: none 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:55.176 Perform nvmf subsystem discovery via RPC 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.176 [ 00:10:55.176 { 00:10:55.176 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:55.176 "subtype": "Discovery", 00:10:55.176 "listen_addresses": [ 00:10:55.176 { 00:10:55.176 "trtype": "TCP", 00:10:55.176 "adrfam": "IPv4", 00:10:55.176 "traddr": "10.0.0.2", 00:10:55.176 "trsvcid": "4420" 00:10:55.176 } 00:10:55.176 ], 00:10:55.176 "allow_any_host": true, 00:10:55.176 "hosts": [] 00:10:55.176 }, 00:10:55.176 { 00:10:55.176 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:55.176 "subtype": "NVMe", 00:10:55.176 "listen_addresses": [ 00:10:55.176 { 00:10:55.176 "trtype": "TCP", 00:10:55.176 "adrfam": "IPv4", 00:10:55.176 "traddr": "10.0.0.2", 00:10:55.176 "trsvcid": "4420" 00:10:55.176 } 00:10:55.176 ], 00:10:55.176 "allow_any_host": true, 00:10:55.176 "hosts": [], 00:10:55.176 "serial_number": "SPDK00000000000001", 00:10:55.176 "model_number": "SPDK bdev Controller", 00:10:55.176 "max_namespaces": 32, 00:10:55.176 "min_cntlid": 1, 00:10:55.176 "max_cntlid": 65519, 00:10:55.176 "namespaces": [ 00:10:55.176 { 00:10:55.176 "nsid": 1, 00:10:55.176 "bdev_name": "Null1", 00:10:55.176 "name": "Null1", 00:10:55.176 "nguid": "26A6268BD0CB471E85701A31FD690778", 00:10:55.176 "uuid": "26a6268b-d0cb-471e-8570-1a31fd690778" 00:10:55.176 } 00:10:55.176 ] 00:10:55.176 }, 00:10:55.176 { 00:10:55.176 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:55.176 "subtype": "NVMe", 00:10:55.176 "listen_addresses": [ 00:10:55.176 { 00:10:55.176 "trtype": "TCP", 00:10:55.176 "adrfam": "IPv4", 00:10:55.176 "traddr": "10.0.0.2", 00:10:55.176 "trsvcid": "4420" 00:10:55.176 } 00:10:55.176 ], 00:10:55.176 "allow_any_host": true, 00:10:55.176 "hosts": [], 00:10:55.176 "serial_number": "SPDK00000000000002", 00:10:55.176 "model_number": "SPDK bdev Controller", 00:10:55.176 "max_namespaces": 32, 00:10:55.176 "min_cntlid": 1, 00:10:55.176 "max_cntlid": 65519, 00:10:55.176 "namespaces": [ 00:10:55.176 { 00:10:55.176 "nsid": 1, 00:10:55.176 "bdev_name": "Null2", 00:10:55.176 "name": "Null2", 00:10:55.176 "nguid": "F0ABFAF9A05C4DC79B51265FF874295F", 
00:10:55.176 "uuid": "f0abfaf9-a05c-4dc7-9b51-265ff874295f" 00:10:55.176 } 00:10:55.176 ] 00:10:55.176 }, 00:10:55.176 { 00:10:55.176 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:55.176 "subtype": "NVMe", 00:10:55.176 "listen_addresses": [ 00:10:55.176 { 00:10:55.176 "trtype": "TCP", 00:10:55.176 "adrfam": "IPv4", 00:10:55.176 "traddr": "10.0.0.2", 00:10:55.176 "trsvcid": "4420" 00:10:55.176 } 00:10:55.176 ], 00:10:55.176 "allow_any_host": true, 00:10:55.176 "hosts": [], 00:10:55.176 "serial_number": "SPDK00000000000003", 00:10:55.176 "model_number": "SPDK bdev Controller", 00:10:55.176 "max_namespaces": 32, 00:10:55.176 "min_cntlid": 1, 00:10:55.176 "max_cntlid": 65519, 00:10:55.176 "namespaces": [ 00:10:55.176 { 00:10:55.176 "nsid": 1, 00:10:55.176 "bdev_name": "Null3", 00:10:55.176 "name": "Null3", 00:10:55.176 "nguid": "ECF73DD9E42C4EAEA939B1BA559D2D35", 00:10:55.176 "uuid": "ecf73dd9-e42c-4eae-a939-b1ba559d2d35" 00:10:55.176 } 00:10:55.176 ] 00:10:55.176 }, 00:10:55.176 { 00:10:55.176 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:55.176 "subtype": "NVMe", 00:10:55.176 "listen_addresses": [ 00:10:55.176 { 00:10:55.176 "trtype": "TCP", 00:10:55.176 "adrfam": "IPv4", 00:10:55.176 "traddr": "10.0.0.2", 00:10:55.176 "trsvcid": "4420" 00:10:55.176 } 00:10:55.176 ], 00:10:55.176 "allow_any_host": true, 00:10:55.176 "hosts": [], 00:10:55.176 "serial_number": "SPDK00000000000004", 00:10:55.176 "model_number": "SPDK bdev Controller", 00:10:55.176 "max_namespaces": 32, 00:10:55.176 "min_cntlid": 1, 00:10:55.176 "max_cntlid": 65519, 00:10:55.176 "namespaces": [ 00:10:55.176 { 00:10:55.176 "nsid": 1, 00:10:55.176 "bdev_name": "Null4", 00:10:55.176 "name": "Null4", 00:10:55.176 "nguid": "1DDD3B6C1DAE46F396E8BD552B8BA1E6", 00:10:55.176 "uuid": "1ddd3b6c-1dae-46f3-96e8-bd552b8ba1e6" 00:10:55.176 } 00:10:55.176 ] 00:10:55.176 } 00:10:55.176 ] 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.176 
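The `nvme discover` output above reports "Number of Records 6, Generation counter 6": the current discovery subsystem, the four `cnode` subsystems, and the referral added on port 4430, which matches the `nvmf_get_subsystems` JSON. A quick way to sanity-check such output is to count the record separators; the here-string below is an abridged two-record copy of the output above, just to exercise the parsing:

```shell
# Count discovery log records in `nvme discover`-style output by
# matching the "=====Discovery Log Entry N======" separator lines.
log='=====Discovery Log Entry 0======
subtype: current discovery subsystem
subnqn: nqn.2014-08.org.nvmexpress.discovery
=====Discovery Log Entry 1======
subtype: nvme subsystem
subnqn: nqn.2016-06.io.spdk:cnode1'
entries=$(printf '%s\n' "$log" | grep -c '^=====Discovery Log Entry')
echo "entries=$entries"
```

On the full output above the same count would be 6, matching the header line.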
12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.176 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:55.436 rmmod nvme_tcp 00:10:55.436 rmmod nvme_fabrics 00:10:55.436 rmmod nvme_keyring 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1132609 ']' 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1132609 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1132609 ']' 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1132609 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 
00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1132609 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1132609' 00:10:55.436 killing process with pid 1132609 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1132609 00:10:55.436 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1132609 00:10:55.696 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:55.696 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:55.696 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:55.696 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:55.696 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:10:55.696 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:55.696 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:10:55.696 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.696 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:55.696 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.696 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.696 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.603 12:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:57.603 00:10:57.603 real 0m9.390s 00:10:57.603 user 0m5.544s 00:10:57.603 sys 0m4.908s 00:10:57.603 12:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.603 12:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.603 ************************************ 00:10:57.603 END TEST nvmf_target_discovery 00:10:57.603 ************************************ 00:10:57.860 12:51:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:57.860 12:51:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:57.860 12:51:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.860 12:51:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.860 ************************************ 00:10:57.860 START TEST nvmf_referrals 00:10:57.860 ************************************ 00:10:57.860 12:51:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:57.860 * Looking for test storage... 
00:10:57.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:57.860 12:51:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:57.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.860 
--rc genhtml_branch_coverage=1 00:10:57.860 --rc genhtml_function_coverage=1 00:10:57.860 --rc genhtml_legend=1 00:10:57.860 --rc geninfo_all_blocks=1 00:10:57.860 --rc geninfo_unexecuted_blocks=1 00:10:57.860 00:10:57.860 ' 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:57.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.860 --rc genhtml_branch_coverage=1 00:10:57.860 --rc genhtml_function_coverage=1 00:10:57.860 --rc genhtml_legend=1 00:10:57.860 --rc geninfo_all_blocks=1 00:10:57.860 --rc geninfo_unexecuted_blocks=1 00:10:57.860 00:10:57.860 ' 00:10:57.860 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:57.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.860 --rc genhtml_branch_coverage=1 00:10:57.860 --rc genhtml_function_coverage=1 00:10:57.860 --rc genhtml_legend=1 00:10:57.860 --rc geninfo_all_blocks=1 00:10:57.861 --rc geninfo_unexecuted_blocks=1 00:10:57.861 00:10:57.861 ' 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:57.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.861 --rc genhtml_branch_coverage=1 00:10:57.861 --rc genhtml_function_coverage=1 00:10:57.861 --rc genhtml_legend=1 00:10:57.861 --rc geninfo_all_blocks=1 00:10:57.861 --rc geninfo_unexecuted_blocks=1 00:10:57.861 00:10:57.861 ' 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.861 
12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.861 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.119 12:51:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:58.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:58.119 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:58.120 12:51:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:58.120 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.693 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:04.694 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:04.694 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:04.694 Found net devices under 0000:86:00.0: cvl_0_0 00:11:04.694 12:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:04.694 Found net devices under 0000:86:00.1: cvl_0_1 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.694 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:04.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:11:04.694 00:11:04.694 --- 10.0.0.2 ping statistics --- 00:11:04.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.694 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:11:04.694 00:11:04.694 --- 10.0.0.1 ping statistics --- 00:11:04.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.694 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1136335 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1136335 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1136335 ']' 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.694 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.694 [2024-10-15 12:51:24.232534] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:11:04.694 [2024-10-15 12:51:24.232577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.694 [2024-10-15 12:51:24.304847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.694 [2024-10-15 12:51:24.346749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.694 [2024-10-15 12:51:24.346785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:04.694 [2024-10-15 12:51:24.346792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.694 [2024-10-15 12:51:24.346797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.694 [2024-10-15 12:51:24.346802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.694 [2024-10-15 12:51:24.348285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.694 [2024-10-15 12:51:24.348393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.694 [2024-10-15 12:51:24.348494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.695 [2024-10-15 12:51:24.348494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.695 [2024-10-15 12:51:24.484246] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.695 [2024-10-15 12:51:24.497558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:04.695 12:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.695 12:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.695 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.955 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:04.956 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:04.956 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:04.956 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.956 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:04.956 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.956 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.956 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:05.215 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:05.215 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:05.215 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:05.215 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:05.215 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:05.215 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:05.215 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 
10.0.0.2 -s 8009 -o json 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.474 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.733 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:05.734 12:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:05.734 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:05.993 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ 
'' == '' ]] 00:11:05.993 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:05.993 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:05.994 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:05.994 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:05.994 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.253 12:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:06.253 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.511 12:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.511 rmmod nvme_tcp 00:11:06.511 rmmod nvme_fabrics 00:11:06.511 rmmod nvme_keyring 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1136335 ']' 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1136335 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1136335 ']' 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1136335 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1136335 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1136335' 00:11:06.511 killing process with pid 1136335 00:11:06.511 12:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1136335 00:11:06.511 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1136335 00:11:06.769 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:06.769 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:06.769 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:06.769 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:06.769 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:06.769 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:06.769 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:06.769 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.769 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.769 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.769 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.769 12:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.672 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.672 00:11:08.672 real 0m10.989s 00:11:08.672 user 0m12.752s 00:11:08.672 sys 0m5.239s 00:11:08.672 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.672 12:51:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.672 ************************************ 00:11:08.672 END TEST nvmf_referrals 00:11:08.672 ************************************ 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:08.932 ************************************ 00:11:08.932 START TEST nvmf_connect_disconnect 00:11:08.932 ************************************ 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:08.932 * Looking for test storage... 
00:11:08.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.932 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.933 --rc genhtml_branch_coverage=1 00:11:08.933 --rc genhtml_function_coverage=1 00:11:08.933 --rc genhtml_legend=1 00:11:08.933 --rc geninfo_all_blocks=1 00:11:08.933 --rc geninfo_unexecuted_blocks=1 00:11:08.933 00:11:08.933 ' 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.933 --rc genhtml_branch_coverage=1 00:11:08.933 --rc genhtml_function_coverage=1 00:11:08.933 --rc genhtml_legend=1 00:11:08.933 --rc geninfo_all_blocks=1 00:11:08.933 --rc geninfo_unexecuted_blocks=1 00:11:08.933 00:11:08.933 ' 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.933 --rc genhtml_branch_coverage=1 00:11:08.933 --rc genhtml_function_coverage=1 00:11:08.933 --rc genhtml_legend=1 00:11:08.933 --rc geninfo_all_blocks=1 00:11:08.933 --rc geninfo_unexecuted_blocks=1 00:11:08.933 00:11:08.933 ' 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.933 --rc genhtml_branch_coverage=1 00:11:08.933 --rc genhtml_function_coverage=1 00:11:08.933 --rc genhtml_legend=1 00:11:08.933 --rc geninfo_all_blocks=1 00:11:08.933 --rc geninfo_unexecuted_blocks=1 00:11:08.933 00:11:08.933 ' 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:08.933 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.193 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.766 12:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:15.766 12:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:15.766 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:15.766 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.766 12:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:15.766 Found net devices under 0000:86:00.0: cvl_0_0 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:15.766 12:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:15.766 Found net devices under 0000:86:00.1: cvl_0_1 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.766 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.767 12:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:11:15.767 00:11:15.767 --- 10.0.0.2 ping statistics --- 00:11:15.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.767 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:15.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:11:15.767 00:11:15.767 --- 10.0.0.1 ping statistics --- 00:11:15.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.767 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # 
nvmfpid=1140412 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1140412 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1140412 ']' 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.767 [2024-10-15 12:51:35.328245] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:11:15.767 [2024-10-15 12:51:35.328288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.767 [2024-10-15 12:51:35.401595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.767 [2024-10-15 12:51:35.441207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:15.767 [2024-10-15 12:51:35.441245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.767 [2024-10-15 12:51:35.441251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.767 [2024-10-15 12:51:35.441257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.767 [2024-10-15 12:51:35.441261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.767 [2024-10-15 12:51:35.442683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.767 [2024-10-15 12:51:35.442791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.767 [2024-10-15 12:51:35.442876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.767 [2024-10-15 12:51:35.442877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:15.767 12:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.767 [2024-10-15 12:51:35.591749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.767 12:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.767 [2024-10-15 12:51:35.653229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:15.767 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:19.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.223 12:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:32.223 12:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:32.223 12:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:32.223 12:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:32.223 12:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.223 12:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:32.223 12:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.223 12:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.223 rmmod nvme_tcp 00:11:32.223 rmmod nvme_fabrics 00:11:32.223 rmmod nvme_keyring 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1140412 ']' 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1140412 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1140412 ']' 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1140412 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1140412 
00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1140412' 00:11:32.223 killing process with pid 1140412 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1140412 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1140412 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.223 12:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.223 12:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.128 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.128 00:11:34.128 real 0m25.306s 00:11:34.128 user 1m8.614s 00:11:34.128 sys 0m5.797s 00:11:34.128 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.128 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.128 ************************************ 00:11:34.128 END TEST nvmf_connect_disconnect 00:11:34.128 ************************************ 00:11:34.128 12:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:34.128 12:51:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:34.128 12:51:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.128 12:51:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.128 ************************************ 00:11:34.128 START TEST nvmf_multitarget 00:11:34.128 ************************************ 00:11:34.128 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:34.388 * Looking for test storage... 
00:11:34.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.388 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:34.389 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.389 --rc genhtml_branch_coverage=1 00:11:34.389 --rc genhtml_function_coverage=1 00:11:34.389 --rc genhtml_legend=1 00:11:34.389 --rc geninfo_all_blocks=1 00:11:34.389 --rc geninfo_unexecuted_blocks=1 00:11:34.389 00:11:34.389 ' 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:34.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.389 --rc genhtml_branch_coverage=1 00:11:34.389 --rc genhtml_function_coverage=1 00:11:34.389 --rc genhtml_legend=1 00:11:34.389 --rc geninfo_all_blocks=1 00:11:34.389 --rc geninfo_unexecuted_blocks=1 00:11:34.389 00:11:34.389 ' 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:34.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.389 --rc genhtml_branch_coverage=1 00:11:34.389 --rc genhtml_function_coverage=1 00:11:34.389 --rc genhtml_legend=1 00:11:34.389 --rc geninfo_all_blocks=1 00:11:34.389 --rc geninfo_unexecuted_blocks=1 00:11:34.389 00:11:34.389 ' 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:34.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.389 --rc genhtml_branch_coverage=1 00:11:34.389 --rc genhtml_function_coverage=1 00:11:34.389 --rc genhtml_legend=1 00:11:34.389 --rc geninfo_all_blocks=1 00:11:34.389 --rc geninfo_unexecuted_blocks=1 00:11:34.389 00:11:34.389 ' 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.389 12:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.389 12:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.389 12:51:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:40.959 12:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:40.959 12:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:40.959 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:40.959 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.959 12:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:40.959 Found net devices under 0000:86:00.0: cvl_0_0 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.959 
12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:40.959 Found net devices under 0000:86:00.1: cvl_0_1 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.959 12:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:40.959 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:40.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:11:40.960 00:11:40.960 --- 10.0.0.2 ping statistics --- 00:11:40.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.960 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:40.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:11:40.960 00:11:40.960 --- 10.0.0.1 ping statistics --- 00:11:40.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.960 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1146809 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # 
waitforlisten 1146809 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1146809 ']' 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:40.960 [2024-10-15 12:52:00.667352] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:11:40.960 [2024-10-15 12:52:00.667403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.960 [2024-10-15 12:52:00.738947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.960 [2024-10-15 12:52:00.781219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.960 [2024-10-15 12:52:00.781257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:40.960 [2024-10-15 12:52:00.781265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.960 [2024-10-15 12:52:00.781271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.960 [2024-10-15 12:52:00.781276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.960 [2024-10-15 12:52:00.782854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.960 [2024-10-15 12:52:00.782963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.960 [2024-10-15 12:52:00.783070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.960 [2024-10-15 12:52:00.783071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:40.960 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:40.960 12:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:40.960 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:40.960 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:40.960 "nvmf_tgt_1" 00:11:40.960 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:40.960 "nvmf_tgt_2" 00:11:40.960 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:40.960 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:41.219 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:41.219 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:41.219 true 00:11:41.219 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:41.478 true 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:41.478 rmmod nvme_tcp 00:11:41.478 rmmod nvme_fabrics 00:11:41.478 rmmod nvme_keyring 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1146809 ']' 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1146809 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1146809 ']' 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1146809 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1146809 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1146809' 00:11:41.478 killing process with pid 1146809 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1146809 00:11:41.478 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1146809 00:11:41.737 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:41.737 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:41.737 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:41.737 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:41.737 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:11:41.737 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:41.737 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:11:41.737 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.737 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:41.737 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.737 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.737 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.274 00:11:44.274 real 0m9.595s 00:11:44.274 user 0m7.193s 00:11:44.274 sys 0m4.886s 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:44.274 ************************************ 00:11:44.274 END TEST nvmf_multitarget 00:11:44.274 ************************************ 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.274 ************************************ 00:11:44.274 START TEST nvmf_rpc 00:11:44.274 ************************************ 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:44.274 * Looking for test storage... 
00:11:44.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:44.274 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.275 12:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:44.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.275 --rc genhtml_branch_coverage=1 00:11:44.275 --rc genhtml_function_coverage=1 00:11:44.275 --rc genhtml_legend=1 00:11:44.275 --rc geninfo_all_blocks=1 00:11:44.275 --rc geninfo_unexecuted_blocks=1 
00:11:44.275 00:11:44.275 ' 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:44.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.275 --rc genhtml_branch_coverage=1 00:11:44.275 --rc genhtml_function_coverage=1 00:11:44.275 --rc genhtml_legend=1 00:11:44.275 --rc geninfo_all_blocks=1 00:11:44.275 --rc geninfo_unexecuted_blocks=1 00:11:44.275 00:11:44.275 ' 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:44.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.275 --rc genhtml_branch_coverage=1 00:11:44.275 --rc genhtml_function_coverage=1 00:11:44.275 --rc genhtml_legend=1 00:11:44.275 --rc geninfo_all_blocks=1 00:11:44.275 --rc geninfo_unexecuted_blocks=1 00:11:44.275 00:11:44.275 ' 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:44.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.275 --rc genhtml_branch_coverage=1 00:11:44.275 --rc genhtml_function_coverage=1 00:11:44.275 --rc genhtml_legend=1 00:11:44.275 --rc geninfo_all_blocks=1 00:11:44.275 --rc geninfo_unexecuted_blocks=1 00:11:44.275 00:11:44.275 ' 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.275 12:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:44.275 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.275 12:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.850 
12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:50.850 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:50.850 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:50.850 Found net devices under 0000:86:00.0: cvl_0_0 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:50.850 Found net devices under 0000:86:00.1: cvl_0_1 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.850 12:52:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.850 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:50.851 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:50.851 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.851 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.851 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:50.851 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:50.851 
12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.851 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:50.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:11:50.851 00:11:50.851 --- 10.0.0.2 ping statistics --- 00:11:50.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.851 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:11:50.851 00:11:50.851 --- 10.0.0.1 ping statistics --- 00:11:50.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.851 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1150553 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1150553 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1150553 ']' 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.851 [2024-10-15 12:52:10.315123] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:11:50.851 [2024-10-15 12:52:10.315169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.851 [2024-10-15 12:52:10.368928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.851 [2024-10-15 12:52:10.411216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.851 [2024-10-15 12:52:10.411251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:50.851 [2024-10-15 12:52:10.411258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.851 [2024-10-15 12:52:10.411264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.851 [2024-10-15 12:52:10.411269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.851 [2024-10-15 12:52:10.412818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.851 [2024-10-15 12:52:10.412927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.851 [2024-10-15 12:52:10.413038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.851 [2024-10-15 12:52:10.413039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.851 12:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:50.851 "tick_rate": 2100000000, 00:11:50.851 "poll_groups": [ 00:11:50.851 { 00:11:50.851 "name": "nvmf_tgt_poll_group_000", 00:11:50.851 "admin_qpairs": 0, 00:11:50.851 "io_qpairs": 0, 00:11:50.851 "current_admin_qpairs": 0, 00:11:50.851 "current_io_qpairs": 0, 00:11:50.851 "pending_bdev_io": 0, 00:11:50.851 "completed_nvme_io": 0, 00:11:50.851 "transports": [] 00:11:50.851 }, 00:11:50.851 { 00:11:50.851 "name": "nvmf_tgt_poll_group_001", 00:11:50.851 "admin_qpairs": 0, 00:11:50.851 "io_qpairs": 0, 00:11:50.851 "current_admin_qpairs": 0, 00:11:50.851 "current_io_qpairs": 0, 00:11:50.851 "pending_bdev_io": 0, 00:11:50.851 "completed_nvme_io": 0, 00:11:50.851 "transports": [] 00:11:50.851 }, 00:11:50.851 { 00:11:50.851 "name": "nvmf_tgt_poll_group_002", 00:11:50.851 "admin_qpairs": 0, 00:11:50.851 "io_qpairs": 0, 00:11:50.851 "current_admin_qpairs": 0, 00:11:50.851 "current_io_qpairs": 0, 00:11:50.851 "pending_bdev_io": 0, 00:11:50.851 "completed_nvme_io": 0, 00:11:50.851 "transports": [] 00:11:50.851 }, 00:11:50.851 { 00:11:50.851 "name": "nvmf_tgt_poll_group_003", 00:11:50.851 "admin_qpairs": 0, 00:11:50.851 "io_qpairs": 0, 00:11:50.851 "current_admin_qpairs": 0, 00:11:50.851 "current_io_qpairs": 0, 00:11:50.851 "pending_bdev_io": 0, 00:11:50.851 "completed_nvme_io": 0, 00:11:50.851 "transports": [] 00:11:50.851 } 00:11:50.851 ] 00:11:50.851 }' 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:50.851 12:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.851 [2024-10-15 12:52:10.657153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.851 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:50.851 "tick_rate": 2100000000, 00:11:50.851 "poll_groups": [ 00:11:50.851 { 00:11:50.851 "name": "nvmf_tgt_poll_group_000", 00:11:50.851 "admin_qpairs": 0, 00:11:50.851 "io_qpairs": 0, 00:11:50.851 "current_admin_qpairs": 0, 00:11:50.851 "current_io_qpairs": 0, 00:11:50.851 "pending_bdev_io": 0, 00:11:50.851 "completed_nvme_io": 0, 00:11:50.851 "transports": [ 00:11:50.851 { 00:11:50.851 "trtype": "TCP" 00:11:50.851 } 00:11:50.851 ] 00:11:50.851 }, 00:11:50.851 { 00:11:50.851 "name": "nvmf_tgt_poll_group_001", 00:11:50.851 "admin_qpairs": 0, 00:11:50.851 "io_qpairs": 0, 00:11:50.851 "current_admin_qpairs": 0, 00:11:50.851 "current_io_qpairs": 0, 00:11:50.851 "pending_bdev_io": 0, 00:11:50.851 
"completed_nvme_io": 0, 00:11:50.851 "transports": [ 00:11:50.851 { 00:11:50.851 "trtype": "TCP" 00:11:50.851 } 00:11:50.851 ] 00:11:50.851 }, 00:11:50.851 { 00:11:50.851 "name": "nvmf_tgt_poll_group_002", 00:11:50.851 "admin_qpairs": 0, 00:11:50.851 "io_qpairs": 0, 00:11:50.851 "current_admin_qpairs": 0, 00:11:50.851 "current_io_qpairs": 0, 00:11:50.851 "pending_bdev_io": 0, 00:11:50.852 "completed_nvme_io": 0, 00:11:50.852 "transports": [ 00:11:50.852 { 00:11:50.852 "trtype": "TCP" 00:11:50.852 } 00:11:50.852 ] 00:11:50.852 }, 00:11:50.852 { 00:11:50.852 "name": "nvmf_tgt_poll_group_003", 00:11:50.852 "admin_qpairs": 0, 00:11:50.852 "io_qpairs": 0, 00:11:50.852 "current_admin_qpairs": 0, 00:11:50.852 "current_io_qpairs": 0, 00:11:50.852 "pending_bdev_io": 0, 00:11:50.852 "completed_nvme_io": 0, 00:11:50.852 "transports": [ 00:11:50.852 { 00:11:50.852 "trtype": "TCP" 00:11:50.852 } 00:11:50.852 ] 00:11:50.852 } 00:11:50.852 ] 00:11:50.852 }' 00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:50.852 
12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:50.852 Malloc1
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:50.852 [2024-10-15 12:52:10.839046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:11:50.852 [2024-10-15 12:52:10.875790] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562'
00:11:50.852 Failed to write to /dev/nvme-fabrics: Input/output error
00:11:50.852 could not add new controller: failed to write to nvme-fabrics device
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.852 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:51.789 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:11:51.789 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:11:51.789 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:51.789 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:11:51.789 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:54.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:11:54.324 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:11:54.325 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:54.325 [2024-10-15 12:52:14.247707] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562'
00:11:54.325 Failed to write to /dev/nvme-fabrics: Input/output error
00:11:54.325 could not add new controller: failed to write to nvme-fabrics device
00:11:54.325 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:11:54.325 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:54.325 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:54.325 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:54.325 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:11:54.325 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.325 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.325 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.325 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:55.264 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:11:55.264 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:11:55.264 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:55.264 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:11:55.264 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:11:57.172 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:57.172 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:57.172 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:11:57.172 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:11:57.172 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:11:57.172 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:11:57.172 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:57.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.431 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:57.432 [2024-10-15 12:52:17.641279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:57.432 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.432 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:57.432 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.432 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:57.432 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.432 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:57.432 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.432 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:57.432 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.432 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:58.822 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:58.822 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:11:58.822 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:58.822 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:11:58.822 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:00.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:00.727 [2024-10-15 12:52:20.898571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.727 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:02.107 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:02.107 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:02.107 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:02.107 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:02.107 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:04.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.014 [2024-10-15 12:52:24.162003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.014 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:04.954 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:04.954 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:04.954 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:04.954 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:04.954 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:07.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:07.494 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.495 [2024-10-15 12:52:27.462635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.495 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:08.434 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:08.434 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:08.434 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:08.434 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:08.434 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:10.341 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:10.341 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:10.341 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:10.341 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:10.341 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:10.341 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:10.341 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:10.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.601 [2024-10-15 12:52:30.762387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.601 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:12.048 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:12.048 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:12.048 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:12.048 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:12.048 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- #
sleep 2 00:12:14.058 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:14.058 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:14.058 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.058 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:14.058 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.058 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:14.058 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.059 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 [2024-10-15 12:52:34.068932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 [2024-10-15 12:52:34.117029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 
12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:14.059 [2024-10-15 12:52:34.165176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 [2024-10-15 12:52:34.213340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 [2024-10-15 12:52:34.261522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:14.059 "tick_rate": 2100000000, 00:12:14.059 "poll_groups": [ 00:12:14.059 { 00:12:14.059 "name": "nvmf_tgt_poll_group_000", 00:12:14.059 "admin_qpairs": 2, 00:12:14.059 "io_qpairs": 168, 00:12:14.059 "current_admin_qpairs": 0, 00:12:14.059 "current_io_qpairs": 0, 00:12:14.059 "pending_bdev_io": 0, 00:12:14.059 "completed_nvme_io": 277, 00:12:14.059 "transports": [ 00:12:14.059 { 00:12:14.059 "trtype": "TCP" 00:12:14.059 } 00:12:14.059 ] 00:12:14.059 }, 00:12:14.059 { 00:12:14.059 "name": "nvmf_tgt_poll_group_001", 00:12:14.059 "admin_qpairs": 2, 00:12:14.059 "io_qpairs": 168, 00:12:14.059 "current_admin_qpairs": 0, 00:12:14.059 "current_io_qpairs": 0, 00:12:14.059 "pending_bdev_io": 0, 00:12:14.059 "completed_nvme_io": 205, 00:12:14.059 "transports": [ 00:12:14.059 { 00:12:14.059 "trtype": "TCP" 00:12:14.059 } 00:12:14.059 ] 00:12:14.059 }, 00:12:14.059 { 00:12:14.059 "name": "nvmf_tgt_poll_group_002", 00:12:14.059 "admin_qpairs": 1, 00:12:14.059 "io_qpairs": 168, 00:12:14.059 "current_admin_qpairs": 0, 00:12:14.059 "current_io_qpairs": 0, 00:12:14.059 "pending_bdev_io": 0, 
00:12:14.059 "completed_nvme_io": 274, 00:12:14.059 "transports": [ 00:12:14.059 { 00:12:14.059 "trtype": "TCP" 00:12:14.059 } 00:12:14.059 ] 00:12:14.059 }, 00:12:14.059 { 00:12:14.059 "name": "nvmf_tgt_poll_group_003", 00:12:14.059 "admin_qpairs": 2, 00:12:14.059 "io_qpairs": 168, 00:12:14.059 "current_admin_qpairs": 0, 00:12:14.059 "current_io_qpairs": 0, 00:12:14.059 "pending_bdev_io": 0, 00:12:14.059 "completed_nvme_io": 266, 00:12:14.059 "transports": [ 00:12:14.059 { 00:12:14.059 "trtype": "TCP" 00:12:14.059 } 00:12:14.059 ] 00:12:14.059 } 00:12:14.059 ] 00:12:14.059 }' 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:14.059 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.319 rmmod nvme_tcp 00:12:14.319 rmmod nvme_fabrics 00:12:14.319 rmmod nvme_keyring 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1150553 ']' 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1150553 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1150553 ']' 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1150553 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1150553 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1150553' 00:12:14.319 killing process with pid 1150553 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1150553 00:12:14.319 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1150553 00:12:14.577 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:14.577 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:14.577 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:14.577 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:14.577 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:12:14.577 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:14.577 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:12:14.577 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.577 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:14.577 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.577 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.578 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.483 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:16.483 00:12:16.483 real 0m32.685s 00:12:16.483 user 1m38.556s 00:12:16.483 sys 0m6.388s 00:12:16.483 12:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.483 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.483 ************************************ 00:12:16.483 END TEST nvmf_rpc 00:12:16.483 ************************************ 00:12:16.742 12:52:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:16.743 12:52:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:16.743 12:52:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.743 12:52:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:16.743 ************************************ 00:12:16.743 START TEST nvmf_invalid 00:12:16.743 ************************************ 00:12:16.743 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:16.743 * Looking for test storage... 
00:12:16.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.743 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:16.743 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:12:16.743 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:16.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.743 --rc genhtml_branch_coverage=1 00:12:16.743 --rc 
genhtml_function_coverage=1 00:12:16.743 --rc genhtml_legend=1 00:12:16.743 --rc geninfo_all_blocks=1 00:12:16.743 --rc geninfo_unexecuted_blocks=1 00:12:16.743 00:12:16.743 ' 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:16.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.743 --rc genhtml_branch_coverage=1 00:12:16.743 --rc genhtml_function_coverage=1 00:12:16.743 --rc genhtml_legend=1 00:12:16.743 --rc geninfo_all_blocks=1 00:12:16.743 --rc geninfo_unexecuted_blocks=1 00:12:16.743 00:12:16.743 ' 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:16.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.743 --rc genhtml_branch_coverage=1 00:12:16.743 --rc genhtml_function_coverage=1 00:12:16.743 --rc genhtml_legend=1 00:12:16.743 --rc geninfo_all_blocks=1 00:12:16.743 --rc geninfo_unexecuted_blocks=1 00:12:16.743 00:12:16.743 ' 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:16.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.743 --rc genhtml_branch_coverage=1 00:12:16.743 --rc genhtml_function_coverage=1 00:12:16.743 --rc genhtml_legend=1 00:12:16.743 --rc geninfo_all_blocks=1 00:12:16.743 --rc geninfo_unexecuted_blocks=1 00:12:16.743 00:12:16.743 ' 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.743 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.003 12:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:17.003 12:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.003 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:23.571 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:23.572 12:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.572 12:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:23.572 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:23.572 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:23.572 Found net devices under 0000:86:00.0: cvl_0_0 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:23.572 Found net devices under 0000:86:00.1: cvl_0_1 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.572 12:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.572 12:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.572 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:23.572 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:23.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:12:23.572 00:12:23.572 --- 10.0.0.2 ping statistics --- 00:12:23.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.572 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:12:23.572 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:12:23.572 00:12:23.572 --- 10.0.0.1 ping statistics --- 00:12:23.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.572 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:12:23.572 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.572 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:12:23.572 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:23.572 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.572 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:23.572 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:23.572 12:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.572 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:23.572 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1158208 00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1158208 00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1158208 ']' 00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:23.573 [2024-10-15 12:52:43.115877] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:12:23.573 [2024-10-15 12:52:43.115926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.573 [2024-10-15 12:52:43.189325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.573 [2024-10-15 12:52:43.231928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.573 [2024-10-15 12:52:43.231966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.573 [2024-10-15 12:52:43.231973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.573 [2024-10-15 12:52:43.231979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.573 [2024-10-15 12:52:43.231984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:23.573 [2024-10-15 12:52:43.233500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:23.573 [2024-10-15 12:52:43.233624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:23.573 [2024-10-15 12:52:43.233692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:23.573 [2024-10-15 12:52:43.233692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11702
00:12:23.573 [2024-10-15 12:52:43.535119] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:12:23.573 {
00:12:23.573 "nqn": "nqn.2016-06.io.spdk:cnode11702",
00:12:23.573 "tgt_name": "foobar",
00:12:23.573 "method": "nvmf_create_subsystem",
00:12:23.573 "req_id": 1
00:12:23.573 }
00:12:23.573 Got JSON-RPC error response
00:12:23.573 response:
00:12:23.573 {
00:12:23.573 "code": -32603,
00:12:23.573 "message": "Unable to find target foobar"
00:12:23.573 }'
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:12:23.573 {
00:12:23.573 "nqn": "nqn.2016-06.io.spdk:cnode11702",
00:12:23.573 "tgt_name": "foobar",
00:12:23.573 "method": "nvmf_create_subsystem",
00:12:23.573 "req_id": 1
00:12:23.573 }
00:12:23.573 Got JSON-RPC error response
00:12:23.573 response:
00:12:23.573 {
00:12:23.573 "code": -32603,
00:12:23.573 "message": "Unable to find target foobar"
00:12:23.573 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7478
00:12:23.573 [2024-10-15 12:52:43.743842] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7478: invalid serial number 'SPDKISFASTANDAWESOME'
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:12:23.573 {
00:12:23.573 "nqn": "nqn.2016-06.io.spdk:cnode7478",
00:12:23.573 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:12:23.573 "method": "nvmf_create_subsystem",
00:12:23.573 "req_id": 1
00:12:23.573 }
00:12:23.573 Got JSON-RPC error response
00:12:23.573 response:
00:12:23.573 {
00:12:23.573 "code": -32602,
00:12:23.573 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:12:23.573 }'
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:12:23.573 {
00:12:23.573 "nqn": "nqn.2016-06.io.spdk:cnode7478",
00:12:23.573 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:12:23.573 "method": "nvmf_create_subsystem",
00:12:23.573 "req_id": 1
00:12:23.573 }
00:12:23.573 Got JSON-RPC error response
00:12:23.573 response:
00:12:23.573 {
00:12:23.573 "code": -32602,
00:12:23.573 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:12:23.573 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:12:23.573 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28764
00:12:23.833 [2024-10-15 12:52:43.956525] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28764: invalid model number 'SPDK_Controller'
00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:12:23.833 {
00:12:23.833 "nqn": "nqn.2016-06.io.spdk:cnode28764",
00:12:23.833 "model_number": "SPDK_Controller\u001f",
00:12:23.833 "method": "nvmf_create_subsystem",
00:12:23.833 "req_id": 1
00:12:23.833 }
00:12:23.833 Got JSON-RPC error response
00:12:23.833 response:
00:12:23.833 {
00:12:23.833 "code": -32602,
00:12:23.833 "message": "Invalid MN SPDK_Controller\u001f"
00:12:23.833 }'
00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:12:23.833 {
00:12:23.833 "nqn": "nqn.2016-06.io.spdk:cnode28764",
00:12:23.833 "model_number": "SPDK_Controller\u001f",
00:12:23.833 "method": "nvmf_create_subsystem",
00:12:23.833 "req_id": 1
00:12:23.833 }
00:12:23.833 Got JSON-RPC error response
00:12:23.833 response:
00:12:23.833 {
00:12:23.833 "code": -32602,
00:12:23.833 "message": "Invalid MN SPDK_Controller\u001f"
00:12:23.833 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.833 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.833 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:23.833 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:23.833 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:23.833 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:23.834 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.834 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ E == \- ]] 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'EGxU0JfCHU-670oY0'\''QB!' 00:12:23.834 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'EGxU0JfCHU-670oY0'\''QB!' nqn.2016-06.io.spdk:cnode23156 00:12:24.093 [2024-10-15 12:52:44.309709] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23156: invalid serial number 'EGxU0JfCHU-670oY0'QB!' 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:24.093 { 00:12:24.093 "nqn": "nqn.2016-06.io.spdk:cnode23156", 00:12:24.093 "serial_number": "EGxU0JfCHU-670oY0'\''QB!", 00:12:24.093 "method": "nvmf_create_subsystem", 00:12:24.093 "req_id": 1 00:12:24.093 } 00:12:24.093 Got JSON-RPC error response 00:12:24.093 response: 00:12:24.093 { 00:12:24.093 "code": -32602, 00:12:24.093 "message": "Invalid SN EGxU0JfCHU-670oY0'\''QB!" 00:12:24.093 }' 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:24.093 { 00:12:24.093 "nqn": "nqn.2016-06.io.spdk:cnode23156", 00:12:24.093 "serial_number": "EGxU0JfCHU-670oY0'QB!", 00:12:24.093 "method": "nvmf_create_subsystem", 00:12:24.093 "req_id": 1 00:12:24.093 } 00:12:24.093 Got JSON-RPC error response 00:12:24.093 response: 00:12:24.093 { 00:12:24.093 "code": -32602, 00:12:24.093 "message": "Invalid SN EGxU0JfCHU-670oY0'QB!" 
00:12:24.093 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x68' 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:24.093 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 
00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.094 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:24.353 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:24.353 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:24.353 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.353 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:24.354 
12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:24.354 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:24.354 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:24.354 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.354 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.354 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.355 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]] 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'uh{U.}ZT@jf@Yd|g{p0 qpttw{ma9W,<_d=C$dTB' 00:12:24.355 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'uh{U.}ZT@jf@Yd|g{p0 qpttw{ma9W,<_d=C$dTB' nqn.2016-06.io.spdk:cnode30260 00:12:24.614 [2024-10-15 12:52:44.783279] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30260: invalid model number 'uh{U.}ZT@jf@Yd|g{p0 qpttw{ma9W,<_d=C$dTB' 00:12:24.614 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:24.614 { 00:12:24.614 "nqn": 
"nqn.2016-06.io.spdk:cnode30260", 00:12:24.614 "model_number": "uh{U.}ZT@jf@Yd|g{p0 qpttw{ma9W\u007f,<_d=C$dTB", 00:12:24.614 "method": "nvmf_create_subsystem", 00:12:24.614 "req_id": 1 00:12:24.614 } 00:12:24.614 Got JSON-RPC error response 00:12:24.614 response: 00:12:24.614 { 00:12:24.614 "code": -32602, 00:12:24.614 "message": "Invalid MN uh{U.}ZT@jf@Yd|g{p0 qpttw{ma9W\u007f,<_d=C$dTB" 00:12:24.614 }' 00:12:24.614 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:24.614 { 00:12:24.614 "nqn": "nqn.2016-06.io.spdk:cnode30260", 00:12:24.614 "model_number": "uh{U.}ZT@jf@Yd|g{p0 qpttw{ma9W\u007f,<_d=C$dTB", 00:12:24.614 "method": "nvmf_create_subsystem", 00:12:24.614 "req_id": 1 00:12:24.614 } 00:12:24.614 Got JSON-RPC error response 00:12:24.614 response: 00:12:24.614 { 00:12:24.614 "code": -32602, 00:12:24.614 "message": "Invalid MN uh{U.}ZT@jf@Yd|g{p0 qpttw{ma9W\u007f,<_d=C$dTB" 00:12:24.614 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:24.614 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:24.872 [2024-10-15 12:52:44.983998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.872 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:25.131 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:25.131 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:25.131 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:25.131 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:25.131 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:25.131 [2024-10-15 12:52:45.389313] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:25.131 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:25.131 { 00:12:25.131 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:25.131 "listen_address": { 00:12:25.131 "trtype": "tcp", 00:12:25.131 "traddr": "", 00:12:25.131 "trsvcid": "4421" 00:12:25.131 }, 00:12:25.131 "method": "nvmf_subsystem_remove_listener", 00:12:25.131 "req_id": 1 00:12:25.131 } 00:12:25.131 Got JSON-RPC error response 00:12:25.131 response: 00:12:25.131 { 00:12:25.131 "code": -32602, 00:12:25.131 "message": "Invalid parameters" 00:12:25.131 }' 00:12:25.131 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:25.131 { 00:12:25.131 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:25.131 "listen_address": { 00:12:25.131 "trtype": "tcp", 00:12:25.131 "traddr": "", 00:12:25.131 "trsvcid": "4421" 00:12:25.131 }, 00:12:25.131 "method": "nvmf_subsystem_remove_listener", 00:12:25.131 "req_id": 1 00:12:25.131 } 00:12:25.131 Got JSON-RPC error response 00:12:25.131 response: 00:12:25.131 { 00:12:25.131 "code": -32602, 00:12:25.131 "message": "Invalid parameters" 00:12:25.131 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:25.131 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16394 -i 0 00:12:25.389 [2024-10-15 12:52:45.605975] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16394: invalid cntlid range [0-65519] 00:12:25.389 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:25.389 { 00:12:25.389 "nqn": 
"nqn.2016-06.io.spdk:cnode16394", 00:12:25.389 "min_cntlid": 0, 00:12:25.389 "method": "nvmf_create_subsystem", 00:12:25.389 "req_id": 1 00:12:25.389 } 00:12:25.389 Got JSON-RPC error response 00:12:25.389 response: 00:12:25.389 { 00:12:25.389 "code": -32602, 00:12:25.389 "message": "Invalid cntlid range [0-65519]" 00:12:25.389 }' 00:12:25.389 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:25.389 { 00:12:25.389 "nqn": "nqn.2016-06.io.spdk:cnode16394", 00:12:25.389 "min_cntlid": 0, 00:12:25.389 "method": "nvmf_create_subsystem", 00:12:25.389 "req_id": 1 00:12:25.389 } 00:12:25.389 Got JSON-RPC error response 00:12:25.389 response: 00:12:25.389 { 00:12:25.389 "code": -32602, 00:12:25.389 "message": "Invalid cntlid range [0-65519]" 00:12:25.389 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:25.389 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15154 -i 65520 00:12:25.647 [2024-10-15 12:52:45.818696] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15154: invalid cntlid range [65520-65519] 00:12:25.647 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:25.647 { 00:12:25.647 "nqn": "nqn.2016-06.io.spdk:cnode15154", 00:12:25.647 "min_cntlid": 65520, 00:12:25.647 "method": "nvmf_create_subsystem", 00:12:25.647 "req_id": 1 00:12:25.647 } 00:12:25.647 Got JSON-RPC error response 00:12:25.647 response: 00:12:25.647 { 00:12:25.647 "code": -32602, 00:12:25.648 "message": "Invalid cntlid range [65520-65519]" 00:12:25.648 }' 00:12:25.648 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:25.648 { 00:12:25.648 "nqn": "nqn.2016-06.io.spdk:cnode15154", 00:12:25.648 "min_cntlid": 65520, 00:12:25.648 "method": "nvmf_create_subsystem", 00:12:25.648 "req_id": 
1 00:12:25.648 } 00:12:25.648 Got JSON-RPC error response 00:12:25.648 response: 00:12:25.648 { 00:12:25.648 "code": -32602, 00:12:25.648 "message": "Invalid cntlid range [65520-65519]" 00:12:25.648 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:25.648 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10792 -I 0 00:12:25.906 [2024-10-15 12:52:46.011359] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10792: invalid cntlid range [1-0] 00:12:25.906 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:25.906 { 00:12:25.906 "nqn": "nqn.2016-06.io.spdk:cnode10792", 00:12:25.906 "max_cntlid": 0, 00:12:25.906 "method": "nvmf_create_subsystem", 00:12:25.906 "req_id": 1 00:12:25.906 } 00:12:25.906 Got JSON-RPC error response 00:12:25.906 response: 00:12:25.906 { 00:12:25.906 "code": -32602, 00:12:25.906 "message": "Invalid cntlid range [1-0]" 00:12:25.906 }' 00:12:25.906 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:25.906 { 00:12:25.906 "nqn": "nqn.2016-06.io.spdk:cnode10792", 00:12:25.906 "max_cntlid": 0, 00:12:25.906 "method": "nvmf_create_subsystem", 00:12:25.906 "req_id": 1 00:12:25.906 } 00:12:25.906 Got JSON-RPC error response 00:12:25.906 response: 00:12:25.906 { 00:12:25.906 "code": -32602, 00:12:25.906 "message": "Invalid cntlid range [1-0]" 00:12:25.906 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:25.906 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7592 -I 65520 00:12:25.906 [2024-10-15 12:52:46.203995] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7592: invalid cntlid range [1-65520] 
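The long per-character trace earlier in this test (repeated `printf %x` / `echo -e` / `string+=` records) comes from a loop in target/invalid.sh that converts numeric byte values into characters to build an invalid model-number string. A minimal standalone sketch of that technique follows; `gen_string` and the printable-ASCII range are illustrative assumptions, not the script's actual names:

```shell
#!/usr/bin/env bash
# Build a string one character at a time, the way the trace above does:
# pick a byte value, render it as hex with printf %x, then expand the
# \xNN escape with echo -e and append the result.
gen_string() {
    local length=$1 string='' ll byte hex
    for (( ll = 0; ll < length; ll++ )); do
        byte=$(( RANDOM % 94 + 33 ))       # printable ASCII, 0x21..0x7e (assumption)
        hex=$(printf %x "$byte")           # e.g. 119 -> 77
        string+=$(echo -e "\\x$hex")       # e.g. \x77 -> w
    done
    printf '%s\n' "$string"
}
```

Each loop iteration in the trace corresponds to one `printf`/`echo -e`/`string+=` triple here.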
00:12:26.164 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:26.164 { 00:12:26.164 "nqn": "nqn.2016-06.io.spdk:cnode7592", 00:12:26.164 "max_cntlid": 65520, 00:12:26.164 "method": "nvmf_create_subsystem", 00:12:26.164 "req_id": 1 00:12:26.164 } 00:12:26.164 Got JSON-RPC error response 00:12:26.164 response: 00:12:26.164 { 00:12:26.164 "code": -32602, 00:12:26.164 "message": "Invalid cntlid range [1-65520]" 00:12:26.164 }' 00:12:26.164 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:26.164 { 00:12:26.164 "nqn": "nqn.2016-06.io.spdk:cnode7592", 00:12:26.164 "max_cntlid": 65520, 00:12:26.164 "method": "nvmf_create_subsystem", 00:12:26.164 "req_id": 1 00:12:26.164 } 00:12:26.164 Got JSON-RPC error response 00:12:26.164 response: 00:12:26.164 { 00:12:26.164 "code": -32602, 00:12:26.164 "message": "Invalid cntlid range [1-65520]" 00:12:26.164 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:26.164 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26435 -i 6 -I 5 00:12:26.164 [2024-10-15 12:52:46.400673] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26435: invalid cntlid range [6-5] 00:12:26.164 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:26.164 { 00:12:26.164 "nqn": "nqn.2016-06.io.spdk:cnode26435", 00:12:26.164 "min_cntlid": 6, 00:12:26.164 "max_cntlid": 5, 00:12:26.164 "method": "nvmf_create_subsystem", 00:12:26.164 "req_id": 1 00:12:26.164 } 00:12:26.164 Got JSON-RPC error response 00:12:26.164 response: 00:12:26.164 { 00:12:26.164 "code": -32602, 00:12:26.164 "message": "Invalid cntlid range [6-5]" 00:12:26.164 }' 00:12:26.164 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:26.164 { 
00:12:26.164 "nqn": "nqn.2016-06.io.spdk:cnode26435", 00:12:26.164 "min_cntlid": 6, 00:12:26.165 "max_cntlid": 5, 00:12:26.165 "method": "nvmf_create_subsystem", 00:12:26.165 "req_id": 1 00:12:26.165 } 00:12:26.165 Got JSON-RPC error response 00:12:26.165 response: 00:12:26.165 { 00:12:26.165 "code": -32602, 00:12:26.165 "message": "Invalid cntlid range [6-5]" 00:12:26.165 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:26.165 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:26.424 { 00:12:26.424 "name": "foobar", 00:12:26.424 "method": "nvmf_delete_target", 00:12:26.424 "req_id": 1 00:12:26.424 } 00:12:26.424 Got JSON-RPC error response 00:12:26.424 response: 00:12:26.424 { 00:12:26.424 "code": -32602, 00:12:26.424 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:26.424 }' 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:26.424 { 00:12:26.424 "name": "foobar", 00:12:26.424 "method": "nvmf_delete_target", 00:12:26.424 "req_id": 1 00:12:26.424 } 00:12:26.424 Got JSON-RPC error response 00:12:26.424 response: 00:12:26.424 { 00:12:26.424 "code": -32602, 00:12:26.424 "message": "The specified target doesn't exist, cannot delete it." 
00:12:26.424 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.424 rmmod nvme_tcp 00:12:26.424 rmmod nvme_fabrics 00:12:26.424 rmmod nvme_keyring 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 1158208 ']' 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 1158208 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1158208 ']' 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1158208 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1158208 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1158208' 00:12:26.424 killing process with pid 1158208 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1158208 00:12:26.424 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1158208 00:12:26.683 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:26.683 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:26.683 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:26.683 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:26.683 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:12:26.683 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:26.683 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:12:26.683 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:26.683 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:26.683 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.683 12:52:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.683 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.590 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:28.590 00:12:28.590 real 0m12.023s 00:12:28.590 user 0m18.476s 00:12:28.590 sys 0m5.436s 00:12:28.590 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.590 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:28.590 ************************************ 00:12:28.590 END TEST nvmf_invalid 00:12:28.590 ************************************ 00:12:28.849 12:52:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:28.850 12:52:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:28.850 12:52:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.850 12:52:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.850 ************************************ 00:12:28.850 START TEST nvmf_connect_stress 00:12:28.850 ************************************ 00:12:28.850 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:28.850 * Looking for test storage... 
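Every negative test in the nvmf_invalid run above follows the same pattern: capture the JSON-RPC error response into `out`, then assert with `[[ ... == *pattern* ]]` that the expected message is present. A sketch of that check with a stubbed response (the literal `out` below stands in for real rpc.py output and is an assumption):

```shell
#!/usr/bin/env bash
# Stubbed JSON-RPC error body, shaped like the responses in the log above.
out='{
  "code": -32602,
  "message": "Invalid cntlid range [0-65519]"
}'

# Assert the error text contains the expected substring, as the
# [[ $out == *\I\n\v\a\l\i\d\ ... ]] checks in the trace do.
if [[ $out == *"Invalid cntlid range"* ]]; then
    echo "error message matched"
else
    echo "unexpected response" >&2
    exit 1
fi
```

The backslash-escaped globs in the trace (`*\I\n\v\a\l\i\d\ \M\N*`) are just xtrace's rendering of quoted pattern characters; the quoted form above is equivalent.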
00:12:28.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:28.850 12:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.850 12:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:28.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.850 --rc genhtml_branch_coverage=1 00:12:28.850 --rc genhtml_function_coverage=1 00:12:28.850 --rc genhtml_legend=1 00:12:28.850 --rc geninfo_all_blocks=1 00:12:28.850 --rc geninfo_unexecuted_blocks=1 00:12:28.850 00:12:28.850 ' 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:28.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.850 --rc genhtml_branch_coverage=1 00:12:28.850 --rc genhtml_function_coverage=1 00:12:28.850 --rc genhtml_legend=1 00:12:28.850 --rc geninfo_all_blocks=1 00:12:28.850 --rc geninfo_unexecuted_blocks=1 00:12:28.850 00:12:28.850 ' 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:28.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.850 --rc genhtml_branch_coverage=1 00:12:28.850 --rc genhtml_function_coverage=1 00:12:28.850 --rc genhtml_legend=1 00:12:28.850 --rc geninfo_all_blocks=1 00:12:28.850 --rc geninfo_unexecuted_blocks=1 00:12:28.850 00:12:28.850 ' 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:28.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.850 --rc genhtml_branch_coverage=1 00:12:28.850 --rc genhtml_function_coverage=1 00:12:28.850 --rc genhtml_legend=1 00:12:28.850 --rc geninfo_all_blocks=1 00:12:28.850 --rc geninfo_unexecuted_blocks=1 00:12:28.850 00:12:28.850 ' 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
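The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.-:` into arrays and compares them field by field. A self-contained sketch of that logic (`version_lt` is an illustrative name, not the function in scripts/common.sh):

```shell
#!/usr/bin/env bash
# Field-wise numeric version comparison, mirroring the cmp_versions
# trace above: split on .-:, then compare each field; missing fields
# are treated as 0.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v a b
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1    # equal is not less-than
}
```

With this logic, `version_lt 1.15 2` succeeds because the first fields already decide the comparison (1 < 2), which is why the trace takes the `lcov_rc_opt` branch.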
00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.850 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:28.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
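The trace above surfaces a real warning from `common.sh` line 33: `'[' '' -eq 1 ']'` fails with `integer expression expected` because an empty variable was handed to a numeric test. The sketch below reproduces that failure mode and shows the usual guard; `maybe_flag` is a stand-in name, not the actual variable in `common.sh`.

```shell
#!/usr/bin/env bash
# Hypothetical reproduction of the "integer expression expected" warning
# logged above: POSIX test rejects an empty string as an operand of -eq.
maybe_flag=""                       # stand-in for the empty variable at common.sh line 33
if [ "$maybe_flag" -eq 1 ] 2>/dev/null; then
  verdict="set"
else
  verdict="unset-or-non-numeric"    # the failed test falls through to the else branch
fi
# Supplying a default with ${var:-0} keeps the operand numeric and silences
# the warning without changing the branch taken:
if [ "${maybe_flag:-0}" -eq 1 ]; then
  guarded="set"
else
  guarded="unset"
fi
echo "$verdict $guarded"
```

In the log the warning is harmless because the failed test simply takes the false branch, which is why the run continues past it.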
00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:28.851 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:35.426 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.427 12:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:35.427 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.427 12:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:35.427 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.427 12:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:35.427 Found net devices under 0000:86:00.0: cvl_0_0 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:35.427 Found net devices under 0000:86:00.1: cvl_0_1 
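The `Found net devices under 0000:86:00.x: cvl_0_x` lines above come from globbing each PCI address's `net/` directory in sysfs and stripping the paths down to interface basenames (`common.sh@409` and `@425`). A minimal sketch of that pattern, using a temp directory as a stand-in for `/sys/bus/pci/devices` since the real layout needs hardware:

```shell
#!/usr/bin/env bash
# Sketch of the device-name extraction traced above: glob the sysfs net/
# directory for a PCI address, then keep only the interface basenames.
# The temp tree mimics the assumed sysfs layout <pci-addr>/net/<ifname>.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0"

pci_net_devs=("$sysfs/0000:86:00.0/net/"*)       # as in common.sh@409
pci_net_devs=("${pci_net_devs[@]##*/}")          # strip paths, as in common.sh@425
echo "Found net devices under 0000:86:00.0: ${pci_net_devs[*]}"

rm -rf "$sysfs"
```

The `##*/` parameter expansion is what turns the full sysfs path into the bare `cvl_0_0` name that later feeds `net_devs` and `TCP_INTERFACE_LIST`.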
00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.427 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:35.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:35.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:12:35.427 00:12:35.427 --- 10.0.0.2 ping statistics --- 00:12:35.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.427 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:12:35.427 00:12:35.427 --- 10.0.0.1 ping statistics --- 00:12:35.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.427 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:35.427 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:35.428 12:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1162382 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1162382 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1162382 ']' 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.428 [2024-10-15 12:52:55.229532] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:12:35.428 [2024-10-15 12:52:55.229579] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.428 [2024-10-15 12:52:55.302607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:35.428 [2024-10-15 12:52:55.344327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.428 [2024-10-15 12:52:55.344361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.428 [2024-10-15 12:52:55.344368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.428 [2024-10-15 12:52:55.344375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.428 [2024-10-15 12:52:55.344380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
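`waitforlisten` above blocks with `max_retries=100` until the freshly launched `nvmf_tgt` is listening on `/var/tmp/spdk.sock`. A minimal sketch of that bounded-polling pattern, generalized to waiting for any path to appear; the function name, retry count, and 0.1s interval are assumptions, not SPDK's exact implementation:

```shell
#!/usr/bin/env bash
# Bounded polling in the style of waitforlisten: retry until a path (the
# RPC socket in the real script) exists, giving up after max_retries.
wait_for_path() {
  local path=$1 max_retries=${2:-100} i=0
  while [ ! -e "$path" ]; do
    i=$((i + 1))
    [ "$i" -ge "$max_retries" ] && return 1    # give up, as waitforlisten does
    sleep 0.1
  done
}

target=$(mktemp -u)
( sleep 0.3; : > "$target" ) &                 # background stand-in for nvmf_tgt startup
wait_for_path "$target" 50 && status=up || status=down
wait
rm -f "$target"
echo "$status"
```

The bound matters: without `max_retries` a target that crashes on startup would hang the whole autotest stage instead of failing it promptly.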
00:12:35.428 [2024-10-15 12:52:55.345785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.428 [2024-10-15 12:52:55.345892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.428 [2024-10-15 12:52:55.345893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.428 [2024-10-15 12:52:55.486058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.428 [2024-10-15 12:52:55.506267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.428 NULL1 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1162496 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.428 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.716 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.716 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:35.716 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.716 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.716 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.974 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.974 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:35.974 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.974 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.974 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.542 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.542 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:36.542 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.542 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.542 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.800 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.800 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:36.800 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.800 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.800 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.059 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.059 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:37.059 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.059 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.059 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.318 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.318 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:37.318 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.318 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.318 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.577 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.577 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:37.577 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.577 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.577 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.144 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.144 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:38.144 12:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.144 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.144 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.407 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.407 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:38.407 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.407 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.407 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.666 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.666 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:38.666 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.666 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.666 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.924 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.924 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:38.924 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.924 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.924 
12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.182 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.182 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:39.182 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.182 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.441 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.700 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.700 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:39.700 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.700 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.700 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.959 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.959 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:39.959 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.959 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.959 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.218 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.218 
12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:40.218 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.218 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.218 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.785 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.785 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:40.785 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.785 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.785 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.044 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.044 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:41.044 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.044 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.044 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.302 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.302 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:41.302 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:41.302 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.302 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.561 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.561 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:41.561 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.561 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.561 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.820 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.820 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:41.820 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.820 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.820 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.388 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.388 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:42.388 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.388 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.388 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:42.646 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.646 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:42.646 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.646 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.646 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.905 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.905 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:42.905 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.905 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.905 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.164 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.164 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:43.164 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.164 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.164 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.422 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.422 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1162496 00:12:43.422 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.422 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.422 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.990 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.990 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:43.990 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.990 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.990 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.249 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.249 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:44.249 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.249 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.249 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.508 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.508 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:44.508 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.509 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:44.509 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.768 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.768 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:44.768 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.768 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.768 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.336 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.336 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:45.336 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.336 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.336 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.336 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1162496 00:12:45.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1162496) - No such process 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1162496 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.595 rmmod nvme_tcp 00:12:45.595 rmmod nvme_fabrics 00:12:45.595 rmmod nvme_keyring 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1162382 ']' 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 1162382 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1162382 ']' 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1162382 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@955 -- # uname 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1162382 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1162382' 00:12:45.595 killing process with pid 1162382 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1162382 00:12:45.595 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1162382 00:12:45.854 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:45.854 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:45.854 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:45.854 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:45.854 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:12:45.854 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:45.854 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:12:45.854 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:45.854 12:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:45.854 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.854 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.854 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.758 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:47.758 00:12:47.758 real 0m19.098s 00:12:47.758 user 0m39.400s 00:12:47.758 sys 0m8.524s 00:12:47.758 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.758 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.758 ************************************ 00:12:47.758 END TEST nvmf_connect_stress 00:12:47.758 ************************************ 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.018 ************************************ 00:12:48.018 START TEST nvmf_fused_ordering 00:12:48.018 ************************************ 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:48.018 * Looking for test storage... 
00:12:48.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:48.018 12:53:08 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.018 12:53:08 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:48.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.018 --rc genhtml_branch_coverage=1 00:12:48.018 --rc genhtml_function_coverage=1 00:12:48.018 --rc genhtml_legend=1 00:12:48.018 --rc geninfo_all_blocks=1 00:12:48.018 --rc geninfo_unexecuted_blocks=1 00:12:48.018 00:12:48.018 ' 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:48.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.018 --rc genhtml_branch_coverage=1 00:12:48.018 --rc genhtml_function_coverage=1 00:12:48.018 --rc genhtml_legend=1 00:12:48.018 --rc geninfo_all_blocks=1 00:12:48.018 --rc geninfo_unexecuted_blocks=1 00:12:48.018 00:12:48.018 ' 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:48.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.018 --rc genhtml_branch_coverage=1 00:12:48.018 --rc genhtml_function_coverage=1 00:12:48.018 --rc genhtml_legend=1 00:12:48.018 --rc geninfo_all_blocks=1 00:12:48.018 --rc geninfo_unexecuted_blocks=1 00:12:48.018 00:12:48.018 ' 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:48.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.018 --rc genhtml_branch_coverage=1 00:12:48.018 --rc genhtml_function_coverage=1 00:12:48.018 --rc genhtml_legend=1 00:12:48.018 --rc geninfo_all_blocks=1 00:12:48.018 --rc geninfo_unexecuted_blocks=1 00:12:48.018 00:12:48.018 ' 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.018 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:48.019 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:48.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:48.278 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:48.279 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.934 12:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:54.934 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.934 12:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:54.934 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.934 12:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:54.934 Found net devices under 0000:86:00.0: cvl_0_0 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:54.934 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.934 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:54.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:12:54.935 00:12:54.935 --- 10.0.0.2 ping statistics --- 00:12:54.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.935 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:12:54.935 00:12:54.935 --- 10.0.0.1 ping statistics --- 00:12:54.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.935 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:54.935 12:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1167776 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1167776 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1167776 ']' 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 [2024-10-15 12:53:14.389102] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:12:54.935 [2024-10-15 12:53:14.389144] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.935 [2024-10-15 12:53:14.457642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.935 [2024-10-15 12:53:14.498197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.935 [2024-10-15 12:53:14.498232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.935 [2024-10-15 12:53:14.498239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.935 [2024-10-15 12:53:14.498245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.935 [2024-10-15 12:53:14.498250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:54.935 [2024-10-15 12:53:14.498841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 [2024-10-15 12:53:14.633268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 [2024-10-15 12:53:14.653469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 NULL1 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.935 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:54.935 [2024-10-15 12:53:14.709483] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:12:54.935 [2024-10-15 12:53:14.709528] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167802 ] 00:12:54.935 Attached to nqn.2016-06.io.spdk:cnode1 00:12:54.935 Namespace ID: 1 size: 1GB 00:12:54.935 fused_ordering(0) 00:12:54.935 fused_ordering(1) 00:12:54.935 fused_ordering(2) 00:12:54.935 fused_ordering(3) 00:12:54.935 fused_ordering(4) 00:12:54.935 fused_ordering(5) 00:12:54.935 fused_ordering(6) 00:12:54.935 fused_ordering(7) 00:12:54.935 fused_ordering(8) 00:12:54.935 fused_ordering(9) 00:12:54.935 fused_ordering(10) 00:12:54.935 fused_ordering(11) 00:12:54.935 fused_ordering(12) 00:12:54.935 fused_ordering(13) 00:12:54.935 fused_ordering(14) 00:12:54.935 fused_ordering(15) 00:12:54.935 fused_ordering(16) 00:12:54.935 fused_ordering(17) 00:12:54.935 fused_ordering(18) 00:12:54.935 fused_ordering(19) 00:12:54.935 fused_ordering(20) 00:12:54.935 fused_ordering(21) 00:12:54.935 fused_ordering(22) 00:12:54.935 fused_ordering(23) 00:12:54.935 fused_ordering(24) 00:12:54.935 fused_ordering(25) 00:12:54.935 fused_ordering(26) 00:12:54.935 fused_ordering(27) 00:12:54.935 
fused_ordering(28) 00:12:54.935 fused_ordering(29) 00:12:54.935 fused_ordering(30) 00:12:54.935 fused_ordering(31) 00:12:54.935 fused_ordering(32) 00:12:54.935 fused_ordering(33) 00:12:54.935 fused_ordering(34) 00:12:54.935 fused_ordering(35) 00:12:54.935 fused_ordering(36) 00:12:54.935 fused_ordering(37) 00:12:54.935 fused_ordering(38) 00:12:54.935 fused_ordering(39) 00:12:54.935 fused_ordering(40) 00:12:54.935 fused_ordering(41) 00:12:54.935 fused_ordering(42) 00:12:54.935 fused_ordering(43) 00:12:54.935 fused_ordering(44) 00:12:54.935 fused_ordering(45) 00:12:54.935 fused_ordering(46) 00:12:54.935 fused_ordering(47) 00:12:54.935 fused_ordering(48) 00:12:54.935 fused_ordering(49) 00:12:54.935 fused_ordering(50) 00:12:54.935 fused_ordering(51) 00:12:54.935 fused_ordering(52) 00:12:54.935 fused_ordering(53) 00:12:54.935 fused_ordering(54) 00:12:54.935 fused_ordering(55) 00:12:54.935 fused_ordering(56) 00:12:54.935 fused_ordering(57) 00:12:54.935 fused_ordering(58) 00:12:54.935 fused_ordering(59) 00:12:54.936 fused_ordering(60) 00:12:54.936 fused_ordering(61) 00:12:54.936 fused_ordering(62) 00:12:54.936 fused_ordering(63) 00:12:54.936 fused_ordering(64) 00:12:54.936 fused_ordering(65) 00:12:54.936 fused_ordering(66) 00:12:54.936 fused_ordering(67) 00:12:54.936 fused_ordering(68) 00:12:54.936 fused_ordering(69) 00:12:54.936 fused_ordering(70) 00:12:54.936 fused_ordering(71) 00:12:54.936 fused_ordering(72) 00:12:54.936 fused_ordering(73) 00:12:54.936 fused_ordering(74) 00:12:54.936 fused_ordering(75) 00:12:54.936 fused_ordering(76) 00:12:54.936 fused_ordering(77) 00:12:54.936 fused_ordering(78) 00:12:54.936 fused_ordering(79) 00:12:54.936 fused_ordering(80) 00:12:54.936 fused_ordering(81) 00:12:54.936 fused_ordering(82) 00:12:54.936 fused_ordering(83) 00:12:54.936 fused_ordering(84) 00:12:54.936 fused_ordering(85) 00:12:54.936 fused_ordering(86) 00:12:54.936 fused_ordering(87) 00:12:54.936 fused_ordering(88) 00:12:54.936 fused_ordering(89) 00:12:54.936 
fused_ordering(90) 00:12:54.936 fused_ordering(91) 00:12:54.936 fused_ordering(92) 00:12:54.936 fused_ordering(93) 00:12:54.936 fused_ordering(94) 00:12:54.936 fused_ordering(95) 00:12:54.936 fused_ordering(96) 00:12:54.936 fused_ordering(97) 00:12:54.936 fused_ordering(98) 00:12:54.936 fused_ordering(99) 00:12:54.936 fused_ordering(100) 00:12:54.936 fused_ordering(101) 00:12:54.936 fused_ordering(102) 00:12:54.936 fused_ordering(103) 00:12:54.936 fused_ordering(104) 00:12:54.936 fused_ordering(105) 00:12:54.936 fused_ordering(106) 00:12:54.936 fused_ordering(107) 00:12:54.936 fused_ordering(108) 00:12:54.936 fused_ordering(109) 00:12:54.936 fused_ordering(110) 00:12:54.936 fused_ordering(111) 00:12:54.936 fused_ordering(112) 00:12:54.936 fused_ordering(113) 00:12:54.936 fused_ordering(114) 00:12:54.936 fused_ordering(115) 00:12:54.936 fused_ordering(116) 00:12:54.936 fused_ordering(117) 00:12:54.936 fused_ordering(118) 00:12:54.936 fused_ordering(119) 00:12:54.936 fused_ordering(120) 00:12:54.936 fused_ordering(121) 00:12:54.936 fused_ordering(122) 00:12:54.936 fused_ordering(123) 00:12:54.936 fused_ordering(124) 00:12:54.936 fused_ordering(125) 00:12:54.936 fused_ordering(126) 00:12:54.936 fused_ordering(127) 00:12:54.936 fused_ordering(128) 00:12:54.936 fused_ordering(129) 00:12:54.936 fused_ordering(130) 00:12:54.936 fused_ordering(131) 00:12:54.936 fused_ordering(132) 00:12:54.936 fused_ordering(133) 00:12:54.936 fused_ordering(134) 00:12:54.936 fused_ordering(135) 00:12:54.936 fused_ordering(136) 00:12:54.936 fused_ordering(137) 00:12:54.936 fused_ordering(138) 00:12:54.936 fused_ordering(139) 00:12:54.936 fused_ordering(140) 00:12:54.936 fused_ordering(141) 00:12:54.936 fused_ordering(142) 00:12:54.936 fused_ordering(143) 00:12:54.936 fused_ordering(144) 00:12:54.936 fused_ordering(145) 00:12:54.936 fused_ordering(146) 00:12:54.936 fused_ordering(147) 00:12:54.936 fused_ordering(148) 00:12:54.936 fused_ordering(149) 00:12:54.936 fused_ordering(150) 
00:12:54.936 fused_ordering(151) 00:12:54.936 fused_ordering(152) 00:12:54.936 fused_ordering(153) 00:12:54.936 fused_ordering(154) 00:12:54.936 fused_ordering(155) 00:12:54.936 fused_ordering(156) 00:12:54.936 fused_ordering(157) 00:12:54.936 fused_ordering(158) 00:12:54.936 fused_ordering(159) 00:12:54.936 fused_ordering(160) 00:12:54.936 fused_ordering(161) 00:12:54.936 fused_ordering(162) 00:12:54.936 fused_ordering(163) 00:12:54.936 fused_ordering(164) 00:12:54.936 fused_ordering(165) 00:12:54.936 fused_ordering(166) 00:12:54.936 fused_ordering(167) 00:12:54.936 fused_ordering(168) 00:12:54.936 fused_ordering(169) 00:12:54.936 fused_ordering(170) 00:12:54.936 fused_ordering(171) 00:12:54.936 fused_ordering(172) 00:12:54.936 fused_ordering(173) 00:12:54.936 fused_ordering(174) 00:12:54.936 fused_ordering(175) 00:12:54.936 fused_ordering(176) 00:12:54.936 fused_ordering(177) 00:12:54.936 fused_ordering(178) 00:12:54.936 fused_ordering(179) 00:12:54.936 fused_ordering(180) 00:12:54.936 fused_ordering(181) 00:12:54.936 fused_ordering(182) 00:12:54.936 fused_ordering(183) 00:12:54.936 fused_ordering(184) 00:12:54.936 fused_ordering(185) 00:12:54.936 fused_ordering(186) 00:12:54.936 fused_ordering(187) 00:12:54.936 fused_ordering(188) 00:12:54.936 fused_ordering(189) 00:12:54.936 fused_ordering(190) 00:12:54.936 fused_ordering(191) 00:12:54.936 fused_ordering(192) 00:12:54.936 fused_ordering(193) 00:12:54.936 fused_ordering(194) 00:12:54.936 fused_ordering(195) 00:12:54.936 fused_ordering(196) 00:12:54.936 fused_ordering(197) 00:12:54.936 fused_ordering(198) 00:12:54.936 fused_ordering(199) 00:12:54.936 fused_ordering(200) 00:12:54.936 fused_ordering(201) 00:12:54.936 fused_ordering(202) 00:12:54.936 fused_ordering(203) 00:12:54.936 fused_ordering(204) 00:12:54.936 fused_ordering(205) 00:12:54.936 fused_ordering(206) 00:12:54.936 fused_ordering(207) 00:12:54.936 fused_ordering(208) 00:12:54.936 fused_ordering(209) 00:12:54.936 fused_ordering(210) 00:12:54.936 
fused_ordering(211) 00:12:54.936 fused_ordering(212) 00:12:54.936 fused_ordering(213) 00:12:54.936 fused_ordering(214) 00:12:54.936 fused_ordering(215) 00:12:54.936 fused_ordering(216) 00:12:54.936 fused_ordering(217) 00:12:54.936 fused_ordering(218) 00:12:54.936 fused_ordering(219) 00:12:54.936 fused_ordering(220) 00:12:54.936 fused_ordering(221) 00:12:54.936 fused_ordering(222) 00:12:54.936 fused_ordering(223) 00:12:54.936 fused_ordering(224) 00:12:54.936 fused_ordering(225) 00:12:54.936 fused_ordering(226) 00:12:54.936 fused_ordering(227) 00:12:54.936 fused_ordering(228) 00:12:54.936 fused_ordering(229) 00:12:54.936 fused_ordering(230) 00:12:54.936 fused_ordering(231) 00:12:54.936 fused_ordering(232) 00:12:54.936 fused_ordering(233) 00:12:54.936 fused_ordering(234) 00:12:54.936 fused_ordering(235) 00:12:54.936 fused_ordering(236) 00:12:54.936 fused_ordering(237) 00:12:54.936 fused_ordering(238) 00:12:54.936 fused_ordering(239) 00:12:54.936 fused_ordering(240) 00:12:54.936 fused_ordering(241) 00:12:54.936 fused_ordering(242) 00:12:54.936 fused_ordering(243) 00:12:54.936 fused_ordering(244) 00:12:54.936 fused_ordering(245) 00:12:54.936 fused_ordering(246) 00:12:54.936 fused_ordering(247) 00:12:54.936 fused_ordering(248) 00:12:54.936 fused_ordering(249) 00:12:54.936 fused_ordering(250) 00:12:54.936 fused_ordering(251) 00:12:54.936 fused_ordering(252) 00:12:54.936 fused_ordering(253) 00:12:54.936 fused_ordering(254) 00:12:54.936 fused_ordering(255) 00:12:54.936 fused_ordering(256) 00:12:54.936 fused_ordering(257) 00:12:54.936 fused_ordering(258) 00:12:54.936 fused_ordering(259) 00:12:54.936 fused_ordering(260) 00:12:54.936 fused_ordering(261) 00:12:54.936 fused_ordering(262) 00:12:54.936 fused_ordering(263) 00:12:54.936 fused_ordering(264) 00:12:54.936 fused_ordering(265) 00:12:54.936 fused_ordering(266) 00:12:54.936 fused_ordering(267) 00:12:54.936 fused_ordering(268) 00:12:54.936 fused_ordering(269) 00:12:54.936 fused_ordering(270) 00:12:54.936 fused_ordering(271) 
00:12:54.936 fused_ordering(272) 00:12:54.936 fused_ordering(273) 00:12:54.936 fused_ordering(274) 00:12:54.936 fused_ordering(275) 00:12:54.936 fused_ordering(276) 00:12:54.936 fused_ordering(277) 00:12:54.936 fused_ordering(278) 00:12:54.936 fused_ordering(279) 00:12:54.936 fused_ordering(280) 00:12:54.936 fused_ordering(281) 00:12:54.936 fused_ordering(282) 00:12:54.936 fused_ordering(283) 00:12:54.936 fused_ordering(284) 00:12:54.936 fused_ordering(285) 00:12:54.936 fused_ordering(286) 00:12:54.936 fused_ordering(287) 00:12:54.936 fused_ordering(288) 00:12:54.936 fused_ordering(289) 00:12:54.936 fused_ordering(290) 00:12:54.936 fused_ordering(291) 00:12:54.936 fused_ordering(292) 00:12:54.936 fused_ordering(293) 00:12:54.936 fused_ordering(294) 00:12:54.936 fused_ordering(295) 00:12:54.936 fused_ordering(296) 00:12:54.936 fused_ordering(297) 00:12:54.936 fused_ordering(298) 00:12:54.936 fused_ordering(299) 00:12:54.936 fused_ordering(300) 00:12:54.936 fused_ordering(301) 00:12:54.936 fused_ordering(302) 00:12:54.936 fused_ordering(303) 00:12:54.936 fused_ordering(304) 00:12:54.936 fused_ordering(305) 00:12:54.936 fused_ordering(306) 00:12:54.936 fused_ordering(307) 00:12:54.936 fused_ordering(308) 00:12:54.936 fused_ordering(309) 00:12:54.936 fused_ordering(310) 00:12:54.936 fused_ordering(311) 00:12:54.936 fused_ordering(312) 00:12:54.936 fused_ordering(313) 00:12:54.936 fused_ordering(314) 00:12:54.936 fused_ordering(315) 00:12:54.936 fused_ordering(316) 00:12:54.936 fused_ordering(317) 00:12:54.936 fused_ordering(318) 00:12:54.936 fused_ordering(319) 00:12:54.936 fused_ordering(320) 00:12:54.936 fused_ordering(321) 00:12:54.936 fused_ordering(322) 00:12:54.936 fused_ordering(323) 00:12:54.936 fused_ordering(324) 00:12:54.936 fused_ordering(325) 00:12:54.936 fused_ordering(326) 00:12:54.936 fused_ordering(327) 00:12:54.936 fused_ordering(328) 00:12:54.936 fused_ordering(329) 00:12:54.936 fused_ordering(330) 00:12:54.936 fused_ordering(331) 00:12:54.936 
fused_ordering(332) 00:12:54.936 fused_ordering(333) 00:12:54.936 fused_ordering(334) 00:12:54.936 fused_ordering(335) 00:12:54.936 fused_ordering(336) 00:12:54.936 fused_ordering(337) 00:12:54.936 fused_ordering(338) 00:12:54.936 fused_ordering(339) 00:12:54.936 fused_ordering(340) 00:12:54.936 fused_ordering(341) 00:12:54.936 fused_ordering(342) 00:12:54.936 fused_ordering(343) 00:12:54.936 fused_ordering(344) 00:12:54.936 fused_ordering(345) 00:12:54.936 fused_ordering(346) 00:12:54.936 fused_ordering(347) 00:12:54.936 fused_ordering(348) 00:12:54.936 fused_ordering(349) 00:12:54.936 fused_ordering(350) 00:12:54.936 fused_ordering(351) 00:12:54.936 fused_ordering(352) 00:12:54.936 fused_ordering(353) 00:12:54.937 fused_ordering(354) 00:12:54.937 fused_ordering(355) 00:12:54.937 fused_ordering(356) 00:12:54.937 fused_ordering(357) 00:12:54.937 fused_ordering(358) 00:12:54.937 fused_ordering(359) 00:12:54.937 fused_ordering(360) 00:12:54.937 fused_ordering(361) 00:12:54.937 fused_ordering(362) 00:12:54.937 fused_ordering(363) 00:12:54.937 fused_ordering(364) 00:12:54.937 fused_ordering(365) 00:12:54.937 fused_ordering(366) 00:12:54.937 fused_ordering(367) 00:12:54.937 fused_ordering(368) 00:12:54.937 fused_ordering(369) 00:12:54.937 fused_ordering(370) 00:12:54.937 fused_ordering(371) 00:12:54.937 fused_ordering(372) 00:12:54.937 fused_ordering(373) 00:12:54.937 fused_ordering(374) 00:12:54.937 fused_ordering(375) 00:12:54.937 fused_ordering(376) 00:12:54.937 fused_ordering(377) 00:12:54.937 fused_ordering(378) 00:12:54.937 fused_ordering(379) 00:12:54.937 fused_ordering(380) 00:12:54.937 fused_ordering(381) 00:12:54.937 fused_ordering(382) 00:12:54.937 fused_ordering(383) 00:12:54.937 fused_ordering(384) 00:12:54.937 fused_ordering(385) 00:12:54.937 fused_ordering(386) 00:12:54.937 fused_ordering(387) 00:12:54.937 fused_ordering(388) 00:12:54.937 fused_ordering(389) 00:12:54.937 fused_ordering(390) 00:12:54.937 fused_ordering(391) 00:12:54.937 fused_ordering(392) 
00:12:54.937 fused_ordering(393) 00:12:54.937 fused_ordering(394) 00:12:54.937 fused_ordering(395) 00:12:54.937 fused_ordering(396) 00:12:54.937 fused_ordering(397) 00:12:54.937 fused_ordering(398) 00:12:54.937 fused_ordering(399) 00:12:54.937 fused_ordering(400) 00:12:54.937 fused_ordering(401) 00:12:54.937 fused_ordering(402) 00:12:54.937 fused_ordering(403) 00:12:54.937 fused_ordering(404) 00:12:54.937 fused_ordering(405) 00:12:54.937 fused_ordering(406) 00:12:54.937 fused_ordering(407) 00:12:54.937 fused_ordering(408) 00:12:54.937 fused_ordering(409) 00:12:54.937 fused_ordering(410) 00:12:55.505 fused_ordering(411) 00:12:55.505 fused_ordering(412) 00:12:55.505 fused_ordering(413) 00:12:55.505 fused_ordering(414) 00:12:55.505 fused_ordering(415) 00:12:55.505 fused_ordering(416) 00:12:55.505 fused_ordering(417) 00:12:55.505 fused_ordering(418) 00:12:55.505 fused_ordering(419) 00:12:55.505 fused_ordering(420) 00:12:55.505 fused_ordering(421) 00:12:55.505 fused_ordering(422) 00:12:55.505 fused_ordering(423) 00:12:55.505 fused_ordering(424) 00:12:55.506 fused_ordering(425) 00:12:55.506 fused_ordering(426) 00:12:55.506 fused_ordering(427) 00:12:55.506 fused_ordering(428) 00:12:55.506 fused_ordering(429) 00:12:55.506 fused_ordering(430) 00:12:55.506 fused_ordering(431) 00:12:55.506 fused_ordering(432) 00:12:55.506 fused_ordering(433) 00:12:55.506 fused_ordering(434) 00:12:55.506 fused_ordering(435) 00:12:55.506 fused_ordering(436) 00:12:55.506 fused_ordering(437) 00:12:55.506 fused_ordering(438) 00:12:55.506 fused_ordering(439) 00:12:55.506 fused_ordering(440) 00:12:55.506 fused_ordering(441) 00:12:55.506 fused_ordering(442) 00:12:55.506 fused_ordering(443) 00:12:55.506 fused_ordering(444) 00:12:55.506 fused_ordering(445) 00:12:55.506 fused_ordering(446) 00:12:55.506 fused_ordering(447) 00:12:55.506 fused_ordering(448) 00:12:55.506 fused_ordering(449) 00:12:55.506 fused_ordering(450) 00:12:55.506 fused_ordering(451) 00:12:55.506 fused_ordering(452) 00:12:55.506 
fused_ordering(453) 00:12:55.506 fused_ordering(454) 00:12:55.506 fused_ordering(455) 00:12:55.506 fused_ordering(456) 00:12:55.506 fused_ordering(457) 00:12:55.506 fused_ordering(458) 00:12:55.506 fused_ordering(459) 00:12:55.506 fused_ordering(460) 00:12:55.506 fused_ordering(461) 00:12:55.506 fused_ordering(462) 00:12:55.506 fused_ordering(463) 00:12:55.506 fused_ordering(464) 00:12:55.506 fused_ordering(465) 00:12:55.506 fused_ordering(466) 00:12:55.506 fused_ordering(467) 00:12:55.506 fused_ordering(468) 00:12:55.506 fused_ordering(469) 00:12:55.506 fused_ordering(470) 00:12:55.506 fused_ordering(471) 00:12:55.506 fused_ordering(472) 00:12:55.506 fused_ordering(473) 00:12:55.506 fused_ordering(474) 00:12:55.506 fused_ordering(475) 00:12:55.506 fused_ordering(476) 00:12:55.506 fused_ordering(477) 00:12:55.506 fused_ordering(478) 00:12:55.506 fused_ordering(479) 00:12:55.506 fused_ordering(480) 00:12:55.506 fused_ordering(481) 00:12:55.506 fused_ordering(482) 00:12:55.506 fused_ordering(483) 00:12:55.506 fused_ordering(484) 00:12:55.506 fused_ordering(485) 00:12:55.506 fused_ordering(486) 00:12:55.506 fused_ordering(487) 00:12:55.506 fused_ordering(488) 00:12:55.506 fused_ordering(489) 00:12:55.506 fused_ordering(490) 00:12:55.506 fused_ordering(491) 00:12:55.506 fused_ordering(492) 00:12:55.506 fused_ordering(493) 00:12:55.506 fused_ordering(494) 00:12:55.506 fused_ordering(495) 00:12:55.506 fused_ordering(496) 00:12:55.506 fused_ordering(497) 00:12:55.506 fused_ordering(498) 00:12:55.506 fused_ordering(499) 00:12:55.506 fused_ordering(500) 00:12:55.506 fused_ordering(501) 00:12:55.506 fused_ordering(502) 00:12:55.506 fused_ordering(503) 00:12:55.506 fused_ordering(504) 00:12:55.506 fused_ordering(505) 00:12:55.506 fused_ordering(506) 00:12:55.506 fused_ordering(507) 00:12:55.506 fused_ordering(508) 00:12:55.506 fused_ordering(509) 00:12:55.506 fused_ordering(510) 00:12:55.506 fused_ordering(511) 00:12:55.506 fused_ordering(512) 00:12:55.506 fused_ordering(513) 
00:12:55.506 fused_ordering(514) 00:12:55.506 fused_ordering(515) 00:12:55.506 fused_ordering(516) 00:12:55.506 fused_ordering(517) 00:12:55.506 fused_ordering(518) 00:12:55.506 fused_ordering(519) 00:12:55.506 fused_ordering(520) 00:12:55.506 fused_ordering(521) 00:12:55.506 fused_ordering(522) 00:12:55.506 fused_ordering(523) 00:12:55.506 fused_ordering(524) 00:12:55.506 fused_ordering(525) 00:12:55.506 fused_ordering(526) 00:12:55.506 fused_ordering(527) 00:12:55.506 fused_ordering(528) 00:12:55.506 fused_ordering(529) 00:12:55.506 fused_ordering(530) 00:12:55.506 fused_ordering(531) 00:12:55.506 fused_ordering(532) 00:12:55.506 fused_ordering(533) 00:12:55.506 fused_ordering(534) 00:12:55.506 fused_ordering(535) 00:12:55.506 fused_ordering(536) 00:12:55.506 fused_ordering(537) 00:12:55.506 fused_ordering(538) 00:12:55.506 fused_ordering(539) 00:12:55.506 fused_ordering(540) 00:12:55.506 fused_ordering(541) 00:12:55.506 fused_ordering(542) 00:12:55.506 fused_ordering(543) 00:12:55.506 fused_ordering(544) 00:12:55.506 fused_ordering(545) 00:12:55.506 fused_ordering(546) 00:12:55.506 fused_ordering(547) 00:12:55.506 fused_ordering(548) 00:12:55.506 fused_ordering(549) 00:12:55.506 fused_ordering(550) 00:12:55.506 fused_ordering(551) 00:12:55.506 fused_ordering(552) 00:12:55.506 fused_ordering(553) 00:12:55.506 fused_ordering(554) 00:12:55.506 fused_ordering(555) 00:12:55.506 fused_ordering(556) 00:12:55.506 fused_ordering(557) 00:12:55.506 fused_ordering(558) 00:12:55.506 fused_ordering(559) 00:12:55.506 fused_ordering(560) 00:12:55.506 fused_ordering(561) 00:12:55.506 fused_ordering(562) 00:12:55.506 fused_ordering(563) 00:12:55.506 fused_ordering(564) 00:12:55.506 fused_ordering(565) 00:12:55.506 fused_ordering(566) 00:12:55.506 fused_ordering(567) 00:12:55.506 fused_ordering(568) 00:12:55.506 fused_ordering(569) 00:12:55.506 fused_ordering(570) 00:12:55.506 fused_ordering(571) 00:12:55.506 fused_ordering(572) 00:12:55.506 fused_ordering(573) 00:12:55.506 
fused_ordering(574) 00:12:55.506 fused_ordering(575) 00:12:55.506 fused_ordering(576) 00:12:55.506 fused_ordering(577) 00:12:55.506 fused_ordering(578) 00:12:55.506 fused_ordering(579) 00:12:55.506 fused_ordering(580) 00:12:55.506 fused_ordering(581) 00:12:55.506 fused_ordering(582) 00:12:55.506 fused_ordering(583) 00:12:55.506 fused_ordering(584) 00:12:55.506 fused_ordering(585) 00:12:55.506 fused_ordering(586) 00:12:55.506 fused_ordering(587) 00:12:55.506 fused_ordering(588) 00:12:55.506 fused_ordering(589) 00:12:55.506 fused_ordering(590) 00:12:55.506 fused_ordering(591) 00:12:55.506 fused_ordering(592) 00:12:55.506 fused_ordering(593) 00:12:55.506 fused_ordering(594) 00:12:55.506 fused_ordering(595) 00:12:55.506 fused_ordering(596) 00:12:55.506 fused_ordering(597) 00:12:55.506 fused_ordering(598) 00:12:55.506 fused_ordering(599) 00:12:55.506 fused_ordering(600) 00:12:55.506 fused_ordering(601) 00:12:55.506 fused_ordering(602) 00:12:55.506 fused_ordering(603) 00:12:55.506 fused_ordering(604) 00:12:55.506 fused_ordering(605) 00:12:55.506 fused_ordering(606) 00:12:55.506 fused_ordering(607) 00:12:55.506 fused_ordering(608) 00:12:55.506 fused_ordering(609) 00:12:55.506 fused_ordering(610) 00:12:55.506 fused_ordering(611) 00:12:55.506 fused_ordering(612) 00:12:55.506 fused_ordering(613) 00:12:55.506 fused_ordering(614) 00:12:55.506 fused_ordering(615) 00:12:55.765 fused_ordering(616) 00:12:55.765 fused_ordering(617) 00:12:55.765 fused_ordering(618) 00:12:55.765 fused_ordering(619) 00:12:55.765 fused_ordering(620) 00:12:55.765 fused_ordering(621) 00:12:55.765 fused_ordering(622) 00:12:55.765 fused_ordering(623) 00:12:55.765 fused_ordering(624) 00:12:55.765 fused_ordering(625) 00:12:55.765 fused_ordering(626) 00:12:55.765 fused_ordering(627) 00:12:55.765 fused_ordering(628) 00:12:55.765 fused_ordering(629) 00:12:55.765 fused_ordering(630) 00:12:55.765 fused_ordering(631) 00:12:55.765 fused_ordering(632) 00:12:55.765 fused_ordering(633) 00:12:55.765 fused_ordering(634) 
00:12:55.765 fused_ordering(635) 00:12:55.765 fused_ordering(636) 00:12:55.765 fused_ordering(637) 00:12:55.765 fused_ordering(638) 00:12:55.765 fused_ordering(639) 00:12:55.765 fused_ordering(640) 00:12:55.765 fused_ordering(641) 00:12:55.765 fused_ordering(642) 00:12:55.765 fused_ordering(643) 00:12:55.765 fused_ordering(644) 00:12:55.765 fused_ordering(645) 00:12:55.765 fused_ordering(646) 00:12:55.765 fused_ordering(647) 00:12:55.765 fused_ordering(648) 00:12:55.765 fused_ordering(649) 00:12:55.766 fused_ordering(650) 00:12:55.766 fused_ordering(651) 00:12:55.766 fused_ordering(652) 00:12:55.766 fused_ordering(653) 00:12:55.766 fused_ordering(654) 00:12:55.766 fused_ordering(655) 00:12:55.766 fused_ordering(656) 00:12:55.766 fused_ordering(657) 00:12:55.766 fused_ordering(658) 00:12:55.766 fused_ordering(659) 00:12:55.766 fused_ordering(660) 00:12:55.766 fused_ordering(661) 00:12:55.766 fused_ordering(662) 00:12:55.766 fused_ordering(663) 00:12:55.766 fused_ordering(664) 00:12:55.766 fused_ordering(665) 00:12:55.766 fused_ordering(666) 00:12:55.766 fused_ordering(667) 00:12:55.766 fused_ordering(668) 00:12:55.766 fused_ordering(669) 00:12:55.766 fused_ordering(670) 00:12:55.766 fused_ordering(671) 00:12:55.766 fused_ordering(672) 00:12:55.766 fused_ordering(673) 00:12:55.766 fused_ordering(674) 00:12:55.766 fused_ordering(675) 00:12:55.766 fused_ordering(676) 00:12:55.766 fused_ordering(677) 00:12:55.766 fused_ordering(678) 00:12:55.766 fused_ordering(679) 00:12:55.766 fused_ordering(680) 00:12:55.766 fused_ordering(681) 00:12:55.766 fused_ordering(682) 00:12:55.766 fused_ordering(683) 00:12:55.766 fused_ordering(684) 00:12:55.766 fused_ordering(685) 00:12:55.766 fused_ordering(686) 00:12:55.766 fused_ordering(687) 00:12:55.766 fused_ordering(688) 00:12:55.766 fused_ordering(689) 00:12:55.766 fused_ordering(690) 00:12:55.766 fused_ordering(691) 00:12:55.766 fused_ordering(692) 00:12:55.766 fused_ordering(693) 00:12:55.766 fused_ordering(694) 00:12:55.766 
fused_ordering(695) 00:12:55.766 fused_ordering(696) 00:12:55.766 fused_ordering(697) 00:12:55.766 fused_ordering(698) 00:12:55.766 fused_ordering(699) 00:12:55.766 fused_ordering(700) 00:12:55.766 fused_ordering(701) 00:12:55.766 fused_ordering(702) 00:12:55.766 fused_ordering(703) 00:12:55.766 fused_ordering(704) 00:12:55.766 fused_ordering(705) 00:12:55.766 fused_ordering(706) 00:12:55.766 fused_ordering(707) 00:12:55.766 fused_ordering(708) 00:12:55.766 fused_ordering(709) 00:12:55.766 fused_ordering(710) 00:12:55.766 fused_ordering(711) 00:12:55.766 fused_ordering(712) 00:12:55.766 fused_ordering(713) 00:12:55.766 fused_ordering(714) 00:12:55.766 fused_ordering(715) 00:12:55.766 fused_ordering(716) 00:12:55.766 fused_ordering(717) 00:12:55.766 fused_ordering(718) 00:12:55.766 fused_ordering(719) 00:12:55.766 fused_ordering(720) 00:12:55.766 fused_ordering(721) 00:12:55.766 fused_ordering(722) 00:12:55.766 fused_ordering(723) 00:12:55.766 fused_ordering(724) 00:12:55.766 fused_ordering(725) 00:12:55.766 fused_ordering(726) 00:12:55.766 fused_ordering(727) 00:12:55.766 fused_ordering(728) 00:12:55.766 fused_ordering(729) 00:12:55.766 fused_ordering(730) 00:12:55.766 fused_ordering(731) 00:12:55.766 fused_ordering(732) 00:12:55.766 fused_ordering(733) 00:12:55.766 fused_ordering(734) 00:12:55.766 fused_ordering(735) 00:12:55.766 fused_ordering(736) 00:12:55.766 fused_ordering(737) 00:12:55.766 fused_ordering(738) 00:12:55.766 fused_ordering(739) 00:12:55.766 fused_ordering(740) 00:12:55.766 fused_ordering(741) 00:12:55.766 fused_ordering(742) 00:12:55.766 fused_ordering(743) 00:12:55.766 fused_ordering(744) 00:12:55.766 fused_ordering(745) 00:12:55.766 fused_ordering(746) 00:12:55.766 fused_ordering(747) 00:12:55.766 fused_ordering(748) 00:12:55.766 fused_ordering(749) 00:12:55.766 fused_ordering(750) 00:12:55.766 fused_ordering(751) 00:12:55.766 fused_ordering(752) 00:12:55.766 fused_ordering(753) 00:12:55.766 fused_ordering(754) 00:12:55.766 fused_ordering(755) 
00:12:55.766 fused_ordering(756) 00:12:55.766 fused_ordering(757) 00:12:55.766 fused_ordering(758) 00:12:55.766 fused_ordering(759) 00:12:55.766 fused_ordering(760) 00:12:55.766 fused_ordering(761) 00:12:55.766 fused_ordering(762) 00:12:55.766 fused_ordering(763) 00:12:55.766 fused_ordering(764) 00:12:55.766 fused_ordering(765) 00:12:55.766 fused_ordering(766) 00:12:55.766 fused_ordering(767) 00:12:55.766 fused_ordering(768) 00:12:55.766 fused_ordering(769) 00:12:55.766 fused_ordering(770) 00:12:55.766 fused_ordering(771) 00:12:55.766 fused_ordering(772) 00:12:55.766 fused_ordering(773) 00:12:55.766 fused_ordering(774) 00:12:55.766 fused_ordering(775) 00:12:55.766 fused_ordering(776) 00:12:55.766 fused_ordering(777) 00:12:55.766 fused_ordering(778) 00:12:55.766 fused_ordering(779) 00:12:55.766 fused_ordering(780) 00:12:55.766 fused_ordering(781) 00:12:55.766 fused_ordering(782) 00:12:55.766 fused_ordering(783) 00:12:55.766 fused_ordering(784) 00:12:55.766 fused_ordering(785) 00:12:55.766 fused_ordering(786) 00:12:55.766 fused_ordering(787) 00:12:55.766 fused_ordering(788) 00:12:55.766 fused_ordering(789) 00:12:55.766 fused_ordering(790) 00:12:55.766 fused_ordering(791) 00:12:55.766 fused_ordering(792) 00:12:55.766 fused_ordering(793) 00:12:55.766 fused_ordering(794) 00:12:55.766 fused_ordering(795) 00:12:55.766 fused_ordering(796) 00:12:55.766 fused_ordering(797) 00:12:55.766 fused_ordering(798) 00:12:55.766 fused_ordering(799) 00:12:55.766 fused_ordering(800) 00:12:55.766 fused_ordering(801) 00:12:55.766 fused_ordering(802) 00:12:55.766 fused_ordering(803) 00:12:55.766 fused_ordering(804) 00:12:55.766 fused_ordering(805) 00:12:55.766 fused_ordering(806) 00:12:55.766 fused_ordering(807) 00:12:55.766 fused_ordering(808) 00:12:55.766 fused_ordering(809) 00:12:55.766 fused_ordering(810) 00:12:55.766 fused_ordering(811) 00:12:55.766 fused_ordering(812) 00:12:55.766 fused_ordering(813) 00:12:55.766 fused_ordering(814) 00:12:55.766 fused_ordering(815) 00:12:55.766 
fused_ordering(816) 00:12:55.766 fused_ordering(817) 00:12:55.766 fused_ordering(818) 00:12:55.766 fused_ordering(819) 00:12:55.766 fused_ordering(820) 00:12:56.334 [2024-10-15 12:53:16.394280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b2b0 is same with the state(6) to be set 00:12:56.334 fused_ordering(821) 00:12:56.334 fused_ordering(822) 00:12:56.334 fused_ordering(823) 00:12:56.334 fused_ordering(824) 00:12:56.334 fused_ordering(825) 00:12:56.334 fused_ordering(826) 00:12:56.334 fused_ordering(827) 00:12:56.334 fused_ordering(828) 00:12:56.334 fused_ordering(829) 00:12:56.334 fused_ordering(830) 00:12:56.334 fused_ordering(831) 00:12:56.334 fused_ordering(832) 00:12:56.334 fused_ordering(833) 00:12:56.334 fused_ordering(834) 00:12:56.334 fused_ordering(835) 00:12:56.334 fused_ordering(836) 00:12:56.334 fused_ordering(837) 00:12:56.334 fused_ordering(838) 00:12:56.334 fused_ordering(839) 00:12:56.334 fused_ordering(840) 00:12:56.334 fused_ordering(841) 00:12:56.334 fused_ordering(842) 00:12:56.334 fused_ordering(843) 00:12:56.334 fused_ordering(844) 00:12:56.334 fused_ordering(845) 00:12:56.334 fused_ordering(846) 00:12:56.334 fused_ordering(847) 00:12:56.334 fused_ordering(848) 00:12:56.334 fused_ordering(849) 00:12:56.334 fused_ordering(850) 00:12:56.334 fused_ordering(851) 00:12:56.334 fused_ordering(852) 00:12:56.334 fused_ordering(853) 00:12:56.334 fused_ordering(854) 00:12:56.334 fused_ordering(855) 00:12:56.334 fused_ordering(856) 00:12:56.334 fused_ordering(857) 00:12:56.334 fused_ordering(858) 00:12:56.334 fused_ordering(859) 00:12:56.334 fused_ordering(860) 00:12:56.334 fused_ordering(861) 00:12:56.334 fused_ordering(862) 00:12:56.334 fused_ordering(863) 00:12:56.334 fused_ordering(864) 00:12:56.334 fused_ordering(865) 00:12:56.334 fused_ordering(866) 00:12:56.334 fused_ordering(867) 00:12:56.334 fused_ordering(868) 00:12:56.334 fused_ordering(869) 00:12:56.334 fused_ordering(870) 00:12:56.334 fused_ordering(871) 
00:12:56.334 fused_ordering(872) 00:12:56.334 fused_ordering(873) 00:12:56.334 fused_ordering(874) 00:12:56.334 fused_ordering(875) 00:12:56.334 fused_ordering(876) 00:12:56.334 fused_ordering(877) 00:12:56.334 fused_ordering(878) 00:12:56.334 fused_ordering(879) 00:12:56.334 fused_ordering(880) 00:12:56.334 fused_ordering(881) 00:12:56.334 fused_ordering(882) 00:12:56.334 fused_ordering(883) 00:12:56.334 fused_ordering(884) 00:12:56.334 fused_ordering(885) 00:12:56.334 fused_ordering(886) 00:12:56.334 fused_ordering(887) 00:12:56.334 fused_ordering(888) 00:12:56.334 fused_ordering(889) 00:12:56.334 fused_ordering(890) 00:12:56.334 fused_ordering(891) 00:12:56.334 fused_ordering(892) 00:12:56.334 fused_ordering(893) 00:12:56.334 fused_ordering(894) 00:12:56.334 fused_ordering(895) 00:12:56.334 fused_ordering(896) 00:12:56.334 fused_ordering(897) 00:12:56.334 fused_ordering(898) 00:12:56.334 fused_ordering(899) 00:12:56.334 fused_ordering(900) 00:12:56.334 fused_ordering(901) 00:12:56.334 fused_ordering(902) 00:12:56.334 fused_ordering(903) 00:12:56.334 fused_ordering(904) 00:12:56.334 fused_ordering(905) 00:12:56.334 fused_ordering(906) 00:12:56.334 fused_ordering(907) 00:12:56.334 fused_ordering(908) 00:12:56.334 fused_ordering(909) 00:12:56.334 fused_ordering(910) 00:12:56.334 fused_ordering(911) 00:12:56.334 fused_ordering(912) 00:12:56.334 fused_ordering(913) 00:12:56.334 fused_ordering(914) 00:12:56.334 fused_ordering(915) 00:12:56.334 fused_ordering(916) 00:12:56.334 fused_ordering(917) 00:12:56.334 fused_ordering(918) 00:12:56.334 fused_ordering(919) 00:12:56.334 fused_ordering(920) 00:12:56.334 fused_ordering(921) 00:12:56.334 fused_ordering(922) 00:12:56.334 fused_ordering(923) 00:12:56.334 fused_ordering(924) 00:12:56.334 fused_ordering(925) 00:12:56.334 fused_ordering(926) 00:12:56.334 fused_ordering(927) 00:12:56.334 fused_ordering(928) 00:12:56.334 fused_ordering(929) 00:12:56.334 fused_ordering(930) 00:12:56.334 fused_ordering(931) 00:12:56.334 
fused_ordering(932) 00:12:56.334 fused_ordering(933) 00:12:56.334 fused_ordering(934) 00:12:56.334 fused_ordering(935) 00:12:56.334 fused_ordering(936) 00:12:56.334 fused_ordering(937) 00:12:56.334 fused_ordering(938) 00:12:56.334 fused_ordering(939) 00:12:56.334 fused_ordering(940) 00:12:56.334 fused_ordering(941) 00:12:56.334 fused_ordering(942) 00:12:56.334 fused_ordering(943) 00:12:56.334 fused_ordering(944) 00:12:56.334 fused_ordering(945) 00:12:56.334 fused_ordering(946) 00:12:56.334 fused_ordering(947) 00:12:56.334 fused_ordering(948) 00:12:56.334 fused_ordering(949) 00:12:56.334 fused_ordering(950) 00:12:56.334 fused_ordering(951) 00:12:56.334 fused_ordering(952) 00:12:56.334 fused_ordering(953) 00:12:56.334 fused_ordering(954) 00:12:56.334 fused_ordering(955) 00:12:56.334 fused_ordering(956) 00:12:56.334 fused_ordering(957) 00:12:56.334 fused_ordering(958) 00:12:56.334 fused_ordering(959) 00:12:56.334 fused_ordering(960) 00:12:56.334 fused_ordering(961) 00:12:56.334 fused_ordering(962) 00:12:56.334 fused_ordering(963) 00:12:56.334 fused_ordering(964) 00:12:56.334 fused_ordering(965) 00:12:56.334 fused_ordering(966) 00:12:56.334 fused_ordering(967) 00:12:56.334 fused_ordering(968) 00:12:56.334 fused_ordering(969) 00:12:56.334 fused_ordering(970) 00:12:56.334 fused_ordering(971) 00:12:56.334 fused_ordering(972) 00:12:56.334 fused_ordering(973) 00:12:56.334 fused_ordering(974) 00:12:56.334 fused_ordering(975) 00:12:56.334 fused_ordering(976) 00:12:56.334 fused_ordering(977) 00:12:56.334 fused_ordering(978) 00:12:56.334 fused_ordering(979) 00:12:56.334 fused_ordering(980) 00:12:56.334 fused_ordering(981) 00:12:56.334 fused_ordering(982) 00:12:56.334 fused_ordering(983) 00:12:56.334 fused_ordering(984) 00:12:56.334 fused_ordering(985) 00:12:56.334 fused_ordering(986) 00:12:56.334 fused_ordering(987) 00:12:56.334 fused_ordering(988) 00:12:56.334 fused_ordering(989) 00:12:56.334 fused_ordering(990) 00:12:56.334 fused_ordering(991) 00:12:56.334 fused_ordering(992) 
00:12:56.334 fused_ordering(993) 00:12:56.334 fused_ordering(994) 00:12:56.334 fused_ordering(995) 00:12:56.334 fused_ordering(996) 00:12:56.334 fused_ordering(997) 00:12:56.334 fused_ordering(998) 00:12:56.334 fused_ordering(999) 00:12:56.334 fused_ordering(1000) 00:12:56.334 fused_ordering(1001) 00:12:56.334 fused_ordering(1002) 00:12:56.334 fused_ordering(1003) 00:12:56.334 fused_ordering(1004) 00:12:56.334 fused_ordering(1005) 00:12:56.334 fused_ordering(1006) 00:12:56.334 fused_ordering(1007) 00:12:56.334 fused_ordering(1008) 00:12:56.334 fused_ordering(1009) 00:12:56.334 fused_ordering(1010) 00:12:56.334 fused_ordering(1011) 00:12:56.334 fused_ordering(1012) 00:12:56.334 fused_ordering(1013) 00:12:56.334 fused_ordering(1014) 00:12:56.334 fused_ordering(1015) 00:12:56.334 fused_ordering(1016) 00:12:56.334 fused_ordering(1017) 00:12:56.334 fused_ordering(1018) 00:12:56.334 fused_ordering(1019) 00:12:56.334 fused_ordering(1020) 00:12:56.334 fused_ordering(1021) 00:12:56.334 fused_ordering(1022) 00:12:56.334 fused_ordering(1023) 00:12:56.334 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:56.334 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:56.334 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:56.334 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:56.335 rmmod nvme_tcp 00:12:56.335 
rmmod nvme_fabrics 00:12:56.335 rmmod nvme_keyring 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1167776 ']' 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1167776 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1167776 ']' 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1167776 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1167776 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1167776' 00:12:56.335 killing process with pid 1167776 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1167776 00:12:56.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1167776 00:12:56.594 12:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:56.594 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:56.594 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:56.594 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:56.594 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:12:56.594 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:56.594 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:12:56.594 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:56.594 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:56.594 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.594 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.594 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.500 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:58.500 00:12:58.500 real 0m10.630s 00:12:58.500 user 0m4.887s 00:12:58.500 sys 0m5.790s 00:12:58.500 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:58.500 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.500 ************************************ 00:12:58.500 END TEST nvmf_fused_ordering 00:12:58.500 
************************************ 00:12:58.500 12:53:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:58.500 12:53:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:58.500 12:53:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.500 12:53:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.760 ************************************ 00:12:58.760 START TEST nvmf_ns_masking 00:12:58.760 ************************************ 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:58.760 * Looking for test storage... 00:12:58.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.760 12:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.760 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:58.761 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.761 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:58.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.761 --rc genhtml_branch_coverage=1 00:12:58.761 --rc genhtml_function_coverage=1 00:12:58.761 --rc genhtml_legend=1 00:12:58.761 --rc geninfo_all_blocks=1 00:12:58.761 --rc 
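The `cmp_versions` trace above (`lt 1.15 2`) compares version strings by splitting each one on `.`, `-` or `:` and walking the numeric components in order, padding the shorter version with zeros. A minimal sketch of that comparison (assumes purely numeric components; the real script additionally validates each component with a `[[ ... =~ ^[0-9]+$ ]]` check):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions logic traced in the log: split on .-: and
# compare component-wise, numerically. Returns 0 (true) if $1 < $2.
version_lt() {
    local IFS=.-:
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}
```

Note that this is numeric, not lexical: `1.15` is less than `2` (as the log's lcov check relies on), and `1.2` is less than `1.15` would be false, since 2 < 15 holds component-wise.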
geninfo_unexecuted_blocks=1 00:12:58.761 00:12:58.761 ' 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:58.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.761 --rc genhtml_branch_coverage=1 00:12:58.761 --rc genhtml_function_coverage=1 00:12:58.761 --rc genhtml_legend=1 00:12:58.761 --rc geninfo_all_blocks=1 00:12:58.761 --rc geninfo_unexecuted_blocks=1 00:12:58.761 00:12:58.761 ' 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:58.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.761 --rc genhtml_branch_coverage=1 00:12:58.761 --rc genhtml_function_coverage=1 00:12:58.761 --rc genhtml_legend=1 00:12:58.761 --rc geninfo_all_blocks=1 00:12:58.761 --rc geninfo_unexecuted_blocks=1 00:12:58.761 00:12:58.761 ' 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:58.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.761 --rc genhtml_branch_coverage=1 00:12:58.761 --rc genhtml_function_coverage=1 00:12:58.761 --rc genhtml_legend=1 00:12:58.761 --rc geninfo_all_blocks=1 00:12:58.761 --rc geninfo_unexecuted_blocks=1 00:12:58.761 00:12:58.761 ' 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.761 12:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.761 12:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=68fb532e-7e36-4a33-bffa-0f8997dcebc6 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3e065440-079c-4f00-be3f-565fdeb9edc5 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:58.761 12:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=fb412731-03a4-4c0e-bc0f-166947384b82 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:58.761 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:05.334 12:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:05.334 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:05.334 12:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:05.334 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:05.334 Found net devices under 0000:86:00.0: cvl_0_0 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:05.334 Found net devices under 0000:86:00.1: 
cvl_0_1 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:05.334 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:05.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:05.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:13:05.335 00:13:05.335 --- 10.0.0.2 ping statistics --- 00:13:05.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.335 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:05.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:05.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:13:05.335 00:13:05.335 --- 10.0.0.1 ping statistics --- 00:13:05.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.335 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:05.335 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter 
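The `nvmf_tcp_init` sequence above builds the test topology: the target-side NIC (`cvl_0_0`) is moved into its own network namespace, each side gets a 10.0.0.x/24 address, and a ping in each direction verifies the data path before the NVMe-oF target is started inside the namespace. A dry-run sketch of that plumbing, which only prints the command plan, since the real sequence needs root and the physical NICs from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology traced in the log. Interface and
# namespace names mirror the log output; nothing is executed, the plan is
# emitted as text so it can be inspected (or piped to `sudo sh` on a
# machine that actually has these devices).
setup_netns_plan() {
    local ns=$1 tgt=$2 ini=$3
    echo ip netns add "$ns"
    echo ip link set "$tgt" netns "$ns"                          # target NIC into namespace
    echo ip addr add 10.0.0.1/24 dev "$ini"                      # initiator side (host)
    echo ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"  # target side (netns)
    echo ip link set "$ini" up
    echo ip netns exec "$ns" ip link set "$tgt" up
    echo ip netns exec "$ns" ip link set lo up
    # verify both directions before launching nvmf_tgt in the namespace
    echo ping -c 1 10.0.0.2
    echo ip netns exec "$ns" ping -c 1 10.0.0.1
}

plan=$(setup_netns_plan cvl_0_0_ns_spdk cvl_0_0 cvl_0_1)
printf '%s\n' "$plan"
```

Isolating the target NIC in a namespace is what lets the subsequent `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt` line run the target against a private network stack while the initiator stays in the host's.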
start_nvmf_tgt 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1171565 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1171565 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1171565 ']' 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:05.335 [2024-10-15 12:53:25.085089] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:13:05.335 [2024-10-15 12:53:25.085142] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.335 [2024-10-15 12:53:25.158964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.335 [2024-10-15 12:53:25.199914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.335 [2024-10-15 12:53:25.199951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.335 [2024-10-15 12:53:25.199958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.335 [2024-10-15 12:53:25.199963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.335 [2024-10-15 12:53:25.199968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:05.335 [2024-10-15 12:53:25.200521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:05.335 [2024-10-15 12:53:25.504421] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:05.335 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:05.594 Malloc1 00:13:05.594 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:05.594 Malloc2 00:13:05.853 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:05.853 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:06.113 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.372 [2024-10-15 12:53:26.453548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.372 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:06.372 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fb412731-03a4-4c0e-bc0f-166947384b82 -a 10.0.0.2 -s 4420 -i 4 00:13:06.372 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.372 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:06.372 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.372 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:06.372 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:08.908 [ 0]:0x1 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=38689a96e4da434dbdc154840ba21a48 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 38689a96e4da434dbdc154840ba21a48 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.908 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.908 [ 0]:0x1 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=38689a96e4da434dbdc154840ba21a48 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 38689a96e4da434dbdc154840ba21a48 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:08.908 [ 1]:0x2 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=425215008abc45b6a6c7fe8df0ddba55 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 425215008abc45b6a6c7fe8df0ddba55 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.908 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.167 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:09.427 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:09.427 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fb412731-03a4-4c0e-bc0f-166947384b82 -a 10.0.0.2 -s 4420 -i 4 00:13:09.687 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:09.687 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:09.687 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.687 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:09.687 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:09.687 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:11.591 12:53:31 
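The `ns_is_visible` checks exercised above boil down to: list the controller's namespaces, then treat a namespace as visible when `nvme id-ns` reports a non-zero NGUID. A minimal stand-alone sketch of that comparison (the JSON here is stubbed sample data taken from the log, not live `nvme` output, so it runs without an NVMe device):

```shell
# Stand-in for: nvme id-ns /dev/nvme0 -n 0x1 -o json
id_ns_json='{"nguid":"38689a96e4da434dbdc154840ba21a48"}'

# Extract the nguid field; an all-zero NGUID means the namespace is masked.
nguid=$(printf '%s' "$id_ns_json" | sed -n 's/.*"nguid":"\([0-9a-f]*\)".*/\1/p')
if [ "$nguid" != "00000000000000000000000000000000" ]; then
    echo "namespace visible (nguid=$nguid)"
else
    echo "namespace masked"
fi
```

This mirrors the log's `[[ $nguid != \0\0... ]]` pattern: visibility is decided purely from the NGUID the target reports to this particular host NQN.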
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:11.591 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.850 [ 0]:0x2 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=425215008abc45b6a6c7fe8df0ddba55 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 425215008abc45b6a6c7fe8df0ddba55 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.850 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:12.109 [ 0]:0x1 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=38689a96e4da434dbdc154840ba21a48 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 38689a96e4da434dbdc154840ba21a48 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:13:12.109 [ 1]:0x2 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=425215008abc45b6a6c7fe8df0ddba55 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 425215008abc45b6a6c7fe8df0ddba55 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:12.109 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:12.368 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:12.369 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:12.369 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:12.369 [ 0]:0x2 00:13:12.369 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:12.369 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:12.369 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=425215008abc45b6a6c7fe8df0ddba55 00:13:12.369 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ 425215008abc45b6a6c7fe8df0ddba55 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:12.369 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:12.369 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.369 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:12.628 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:12.628 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fb412731-03a4-4c0e-bc0f-166947384b82 -a 10.0.0.2 -s 4420 -i 4 00:13:12.887 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:12.887 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.887 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.887 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:12.887 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:12.887 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:14.786 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:14.786 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:13:14.786 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.786 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:14.787 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.787 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:14.787 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:14.787 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:15.044 [ 0]:0x1 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=38689a96e4da434dbdc154840ba21a48 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 38689a96e4da434dbdc154840ba21a48 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:15.044 [ 1]:0x2 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=425215008abc45b6a6c7fe8df0ddba55 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 425215008abc45b6a6c7fe8df0ddba55 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.044 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:15.303 [ 0]:0x2 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:15.303 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.562 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=425215008abc45b6a6c7fe8df0ddba55 00:13:15.562 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 425215008abc45b6a6c7fe8df0ddba55 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.562 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:15.562 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:15.562 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:15.562 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.562 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:15.563 [2024-10-15 12:53:35.808270] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:15.563 request: 00:13:15.563 { 00:13:15.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.563 "nsid": 2, 00:13:15.563 "host": "nqn.2016-06.io.spdk:host1", 00:13:15.563 "method": "nvmf_ns_remove_host", 00:13:15.563 "req_id": 1 00:13:15.563 } 00:13:15.563 Got JSON-RPC error response 00:13:15.563 response: 00:13:15.563 { 00:13:15.563 "code": -32602, 00:13:15.563 "message": "Invalid parameters" 00:13:15.563 } 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:15.563 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:15.822 12:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:15.822 [ 0]:0x2 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=425215008abc45b6a6c7fe8df0ddba55 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 425215008abc45b6a6c7fe8df0ddba55 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1173555 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1173555 /var/tmp/host.sock 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1173555 ']' 00:13:15.822 
12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:15.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:15.822 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:15.822 [2024-10-15 12:53:36.027176] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:13:15.822 [2024-10-15 12:53:36.027227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173555 ] 00:13:15.822 [2024-10-15 12:53:36.096222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.822 [2024-10-15 12:53:36.136520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.081 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:16.081 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:16.081 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.340 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:16.599 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 68fb532e-7e36-4a33-bffa-0f8997dcebc6 00:13:16.599 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:13:16.599 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 68FB532E7E364A33BFFA0F8997DCEBC6 -i 00:13:16.858 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3e065440-079c-4f00-be3f-565fdeb9edc5 00:13:16.858 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:13:16.858 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3E065440079C4F00BE3F565FDEB9EDC5 -i 00:13:16.858 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:17.118 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:17.377 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:17.377 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:17.635 nvme0n1 00:13:17.635 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:17.635 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:17.892 nvme1n2 00:13:17.892 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:17.892 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:17.892 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:17.892 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:17.892 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:18.150 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:18.150 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:18.150 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:18.150 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:18.408 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 68fb532e-7e36-4a33-bffa-0f8997dcebc6 == \6\8\f\b\5\3\2\e\-\7\e\3\6\-\4\a\3\3\-\b\f\f\a\-\0\f\8\9\9\7\d\c\e\b\c\6 ]] 00:13:18.408 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:18.408 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:18.408 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:18.667 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 3e065440-079c-4f00-be3f-565fdeb9edc5 == \3\e\0\6\5\4\4\0\-\0\7\9\c\-\4\f\0\0\-\b\e\3\f\-\5\6\5\f\d\e\b\9\e\d\c\5 ]] 00:13:18.667 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1173555 00:13:18.667 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1173555 ']' 00:13:18.667 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1173555 00:13:18.667 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:18.667 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:18.667 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1173555 00:13:18.667 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:18.667 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:18.667 
12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1173555' 00:13:18.667 killing process with pid 1173555 00:13:18.667 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1173555 00:13:18.667 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1173555 00:13:18.926 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:19.185 rmmod nvme_tcp 00:13:19.185 rmmod nvme_fabrics 00:13:19.185 rmmod nvme_keyring 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' 
-n 1171565 ']' 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1171565 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1171565 ']' 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1171565 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1171565 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1171565' 00:13:19.185 killing process with pid 1171565 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1171565 00:13:19.185 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1171565 00:13:19.444 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:19.444 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:19.444 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:19.444 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:19.444 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:13:19.444 12:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:19.444 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:13:19.444 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:19.444 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:19.444 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.444 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.444 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:21.983 00:13:21.983 real 0m22.871s 00:13:21.983 user 0m24.137s 00:13:21.983 sys 0m6.767s 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:21.983 ************************************ 00:13:21.983 END TEST nvmf_ns_masking 00:13:21.983 ************************************ 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.983 ************************************ 00:13:21.983 START TEST nvmf_nvme_cli 00:13:21.983 ************************************ 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:21.983 * Looking for test storage... 00:13:21.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.983 12:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
scripts/common.sh@368 -- # return 0 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:21.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.983 --rc genhtml_branch_coverage=1 00:13:21.983 --rc genhtml_function_coverage=1 00:13:21.983 --rc genhtml_legend=1 00:13:21.983 --rc geninfo_all_blocks=1 00:13:21.983 --rc geninfo_unexecuted_blocks=1 00:13:21.983 00:13:21.983 ' 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:21.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.983 --rc genhtml_branch_coverage=1 00:13:21.983 --rc genhtml_function_coverage=1 00:13:21.983 --rc genhtml_legend=1 00:13:21.983 --rc geninfo_all_blocks=1 00:13:21.983 --rc geninfo_unexecuted_blocks=1 00:13:21.983 00:13:21.983 ' 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:21.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.983 --rc genhtml_branch_coverage=1 00:13:21.983 --rc genhtml_function_coverage=1 00:13:21.983 --rc genhtml_legend=1 00:13:21.983 --rc geninfo_all_blocks=1 00:13:21.983 --rc geninfo_unexecuted_blocks=1 00:13:21.983 00:13:21.983 ' 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:21.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.983 --rc genhtml_branch_coverage=1 00:13:21.983 --rc genhtml_function_coverage=1 00:13:21.983 --rc genhtml_legend=1 00:13:21.983 --rc geninfo_all_blocks=1 00:13:21.983 --rc geninfo_unexecuted_blocks=1 00:13:21.983 00:13:21.983 ' 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.983 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:21.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:21.984 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.556 
12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.556 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:28.557 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:28.557 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:28.557 12:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:28.557 Found net devices under 0000:86:00.0: cvl_0_0 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:28.557 Found net devices under 0000:86:00.1: cvl_0_1 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # 
ip -4 addr flush cvl_0_1 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:28.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:13:28.557 00:13:28.557 --- 10.0.0.2 ping statistics --- 00:13:28.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.557 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:13:28.557 00:13:28.557 --- 10.0.0.1 ping statistics --- 00:13:28.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.557 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1177794 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1177794 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1177794 ']' 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:28.557 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.557 [2024-10-15 12:53:48.021135] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:13:28.557 [2024-10-15 12:53:48.021182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.557 [2024-10-15 12:53:48.093706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.557 [2024-10-15 12:53:48.134958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.557 [2024-10-15 12:53:48.134994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:28.557 [2024-10-15 12:53:48.135003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.557 [2024-10-15 12:53:48.135008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.557 [2024-10-15 12:53:48.135013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.557 [2024-10-15 12:53:48.136545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.557 [2024-10-15 12:53:48.136662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.557 [2024-10-15 12:53:48.136746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.557 [2024-10-15 12:53:48.136746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.557 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.557 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:13:28.557 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:28.557 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:28.557 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.557 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.557 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.557 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.558 [2024-10-15 12:53:48.280840] tcp.c: 738:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.558 Malloc0 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.558 Malloc1 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.558 12:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.558 [2024-10-15 12:53:48.381643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:28.558 00:13:28.558 Discovery Log Number of Records 2, Generation counter 2 00:13:28.558 =====Discovery Log Entry 0====== 00:13:28.558 trtype: tcp 00:13:28.558 adrfam: ipv4 00:13:28.558 subtype: current discovery subsystem 00:13:28.558 treq: not required 00:13:28.558 portid: 0 00:13:28.558 trsvcid: 4420 00:13:28.558 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:28.558 traddr: 10.0.0.2 00:13:28.558 eflags: explicit discovery connections, duplicate discovery information 00:13:28.558 sectype: none 00:13:28.558 =====Discovery Log Entry 1====== 00:13:28.558 trtype: tcp 00:13:28.558 adrfam: ipv4 00:13:28.558 subtype: nvme subsystem 00:13:28.558 treq: not required 00:13:28.558 portid: 0 00:13:28.558 trsvcid: 4420 00:13:28.558 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:28.558 traddr: 10.0.0.2 00:13:28.558 eflags: none 00:13:28.558 sectype: none 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 
00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:28.558 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.495 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:29.495 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:29.495 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.495 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:29.495 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:29.495 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:32.032 /dev/nvme0n2 ]] 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev 
_ 00:13:32.032 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:13:32.032 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:13:32.032 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:32.032 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:13:32.032 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:32.032 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:32.032 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:13:32.032 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:32.032 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:32.032 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:13:32.032 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:32.032 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:32.032 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.292 rmmod nvme_tcp 00:13:32.292 rmmod nvme_fabrics 00:13:32.292 rmmod 
nvme_keyring 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1177794 ']' 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1177794 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1177794 ']' 00:13:32.292 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1177794 00:13:32.293 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:13:32.293 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.293 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1177794 00:13:32.293 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:32.293 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:32.293 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1177794' 00:13:32.293 killing process with pid 1177794 00:13:32.293 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1177794 00:13:32.293 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1177794 00:13:32.552 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:32.552 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:32.552 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:32.552 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:32.552 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:13:32.552 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:32.552 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:13:32.552 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:32.552 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:32.552 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.552 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.552 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.090 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:35.090 00:13:35.090 real 0m13.099s 00:13:35.090 user 0m20.281s 00:13:35.090 sys 0m5.093s 00:13:35.090 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:35.090 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.090 ************************************ 00:13:35.090 END TEST nvmf_nvme_cli 00:13:35.090 ************************************ 00:13:35.090 12:53:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:35.090 12:53:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:35.090 12:53:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:35.090 12:53:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:35.090 12:53:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.090 ************************************ 00:13:35.090 START TEST nvmf_vfio_user 00:13:35.090 ************************************ 00:13:35.090 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:35.090 * Looking for test storage... 00:13:35.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 
-- # IFS=.-: 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.090 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:35.091 12:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.091 --rc genhtml_branch_coverage=1 00:13:35.091 --rc genhtml_function_coverage=1 00:13:35.091 --rc genhtml_legend=1 00:13:35.091 --rc geninfo_all_blocks=1 00:13:35.091 --rc geninfo_unexecuted_blocks=1 00:13:35.091 00:13:35.091 ' 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.091 --rc genhtml_branch_coverage=1 00:13:35.091 --rc genhtml_function_coverage=1 00:13:35.091 --rc genhtml_legend=1 00:13:35.091 --rc geninfo_all_blocks=1 00:13:35.091 --rc geninfo_unexecuted_blocks=1 00:13:35.091 00:13:35.091 ' 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.091 --rc genhtml_branch_coverage=1 00:13:35.091 --rc genhtml_function_coverage=1 00:13:35.091 --rc genhtml_legend=1 00:13:35.091 --rc geninfo_all_blocks=1 00:13:35.091 --rc geninfo_unexecuted_blocks=1 00:13:35.091 00:13:35.091 ' 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 
00:13:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.091 --rc genhtml_branch_coverage=1 00:13:35.091 --rc genhtml_function_coverage=1 00:13:35.091 --rc genhtml_legend=1 00:13:35.091 --rc geninfo_all_blocks=1 00:13:35.091 --rc geninfo_unexecuted_blocks=1 00:13:35.091 00:13:35.091 ' 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:35.091 12:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1179087 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1179087' 00:13:35.091 Process pid: 1179087 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1179087 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' 
-z 1179087 ']' 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:35.091 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:35.091 [2024-10-15 12:53:55.212119] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:13:35.091 [2024-10-15 12:53:55.212165] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.091 [2024-10-15 12:53:55.280843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.091 [2024-10-15 12:53:55.320063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.091 [2024-10-15 12:53:55.320102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.091 [2024-10-15 12:53:55.320112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.091 [2024-10-15 12:53:55.320117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.092 [2024-10-15 12:53:55.320123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:35.092 [2024-10-15 12:53:55.321744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.092 [2024-10-15 12:53:55.321851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.092 [2024-10-15 12:53:55.321938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.092 [2024-10-15 12:53:55.321938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.350 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:35.350 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:13:35.350 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:36.289 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:36.548 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:36.548 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:36.548 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:36.548 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:36.548 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:36.548 Malloc1 00:13:36.548 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:36.807 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:37.066 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:37.325 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:37.325 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:37.325 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:37.325 Malloc2 00:13:37.584 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:37.584 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:37.892 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:38.152 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:38.152 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:38.152 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:38.152 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:38.152 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:38.152 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:38.152 [2024-10-15 12:53:58.286305] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:13:38.152 [2024-10-15 12:53:58.286353] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179572 ] 00:13:38.152 [2024-10-15 12:53:58.313905] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:38.152 [2024-10-15 12:53:58.323909] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:38.152 [2024-10-15 12:53:58.323928] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb6d67e2000 00:13:38.152 [2024-10-15 12:53:58.324908] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:38.152 [2024-10-15 12:53:58.325911] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:38.152 [2024-10-15 12:53:58.326919] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:38.152 [2024-10-15 12:53:58.327925] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:38.152 [2024-10-15 12:53:58.328933] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:38.152 [2024-10-15 12:53:58.329934] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:38.152 [2024-10-15 12:53:58.330945] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:38.152 [2024-10-15 12:53:58.331944] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:38.152 [2024-10-15 12:53:58.332954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:38.152 [2024-10-15 12:53:58.332965] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb6d67d7000 00:13:38.152 [2024-10-15 12:53:58.333884] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:38.153 [2024-10-15 12:53:58.346862] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:38.153 [2024-10-15 12:53:58.346884] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:38.153 [2024-10-15 12:53:58.349053] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:38.153 
[2024-10-15 12:53:58.349091] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:38.153 [2024-10-15 12:53:58.349164] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:38.153 [2024-10-15 12:53:58.349178] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:38.153 [2024-10-15 12:53:58.349186] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:38.153 [2024-10-15 12:53:58.350606] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:38.153 [2024-10-15 12:53:58.350614] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:38.153 [2024-10-15 12:53:58.350620] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:38.153 [2024-10-15 12:53:58.351062] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:38.153 [2024-10-15 12:53:58.351069] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:38.153 [2024-10-15 12:53:58.351076] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:38.153 [2024-10-15 12:53:58.352065] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:38.153 [2024-10-15 12:53:58.352072] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:38.153 [2024-10-15 12:53:58.353071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:38.153 [2024-10-15 12:53:58.353079] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:38.153 [2024-10-15 12:53:58.353083] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:38.153 [2024-10-15 12:53:58.353089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:38.153 [2024-10-15 12:53:58.353193] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:38.153 [2024-10-15 12:53:58.353198] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:38.153 [2024-10-15 12:53:58.353202] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:38.153 [2024-10-15 12:53:58.354085] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:38.153 [2024-10-15 12:53:58.355091] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:38.153 [2024-10-15 12:53:58.356095] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:38.153 [2024-10-15 12:53:58.357098] vfio_user.c:2836:enable_ctrlr: 
*NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:38.153 [2024-10-15 12:53:58.357174] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:38.153 [2024-10-15 12:53:58.358109] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:38.153 [2024-10-15 12:53:58.358116] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:38.153 [2024-10-15 12:53:58.358121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358137] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:38.153 [2024-10-15 12:53:58.358145] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358158] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:38.153 [2024-10-15 12:53:58.358162] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:38.153 [2024-10-15 12:53:58.358166] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:38.153 [2024-10-15 12:53:58.358178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:38.153 [2024-10-15 12:53:58.358218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:38.153 [2024-10-15 
12:53:58.358226] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:38.153 [2024-10-15 12:53:58.358231] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:38.153 [2024-10-15 12:53:58.358235] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:38.153 [2024-10-15 12:53:58.358238] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:38.153 [2024-10-15 12:53:58.358242] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:38.153 [2024-10-15 12:53:58.358247] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:38.153 [2024-10-15 12:53:58.358251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358257] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:38.153 [2024-10-15 12:53:58.358277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:38.153 [2024-10-15 12:53:58.358288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.153 [2024-10-15 12:53:58.358295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 
cdw10:00000000 cdw11:00000000 00:13:38.153 [2024-10-15 12:53:58.358303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.153 [2024-10-15 12:53:58.358310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.153 [2024-10-15 12:53:58.358314] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358320] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358328] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:38.153 [2024-10-15 12:53:58.358337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:38.153 [2024-10-15 12:53:58.358342] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:38.153 [2024-10-15 12:53:58.358351] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358362] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:38.153 [2024-10-15 12:53:58.358379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:38.153 [2024-10-15 12:53:58.358428] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358435] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358442] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:38.153 [2024-10-15 12:53:58.358446] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:38.153 [2024-10-15 12:53:58.358449] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:38.153 [2024-10-15 12:53:58.358454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:38.153 [2024-10-15 12:53:58.358467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:38.153 [2024-10-15 12:53:58.358475] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:38.153 [2024-10-15 12:53:58.358485] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358492] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358498] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:38.153 [2024-10-15 12:53:58.358502] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:38.153 [2024-10-15 12:53:58.358505] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:38.153 [2024-10-15 12:53:58.358510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:38.153 [2024-10-15 12:53:58.358529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:38.153 [2024-10-15 12:53:58.358540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358547] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:38.153 [2024-10-15 12:53:58.358553] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:38.153 [2024-10-15 12:53:58.358557] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:38.153 [2024-10-15 12:53:58.358559] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:38.153 [2024-10-15 12:53:58.358565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:38.153 [2024-10-15 12:53:58.358574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:38.154 [2024-10-15 12:53:58.358581] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:38.154 [2024-10-15 12:53:58.358587] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:38.154 [2024-10-15 12:53:58.358593] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:38.154 [2024-10-15 12:53:58.358598] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:38.154 [2024-10-15 12:53:58.358607] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:38.154 [2024-10-15 12:53:58.358612] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:38.154 [2024-10-15 12:53:58.358616] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:38.154 [2024-10-15 12:53:58.358620] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:38.154 [2024-10-15 12:53:58.358625] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:38.154 [2024-10-15 12:53:58.358642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:38.154 [2024-10-15 12:53:58.358651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:38.154 [2024-10-15 12:53:58.358661] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:38.154 [2024-10-15 12:53:58.358673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:38.154 [2024-10-15 12:53:58.358683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:38.154 [2024-10-15 12:53:58.358691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:38.154 [2024-10-15 12:53:58.358700] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:38.154 [2024-10-15 12:53:58.358708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:38.154 [2024-10-15 12:53:58.358720] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:38.154 [2024-10-15 12:53:58.358724] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:38.154 [2024-10-15 12:53:58.358727] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:38.154 [2024-10-15 12:53:58.358730] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:38.154 [2024-10-15 12:53:58.358733] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:38.154 [2024-10-15 12:53:58.358739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:38.154 [2024-10-15 12:53:58.358745] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:38.154 [2024-10-15 
12:53:58.358749] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:38.154 [2024-10-15 12:53:58.358752] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:38.154 [2024-10-15 12:53:58.358759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:38.154 [2024-10-15 12:53:58.358765] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:38.154 [2024-10-15 12:53:58.358768] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:38.154 [2024-10-15 12:53:58.358771] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:38.154 [2024-10-15 12:53:58.358777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:38.154 [2024-10-15 12:53:58.358784] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:38.154 [2024-10-15 12:53:58.358788] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:38.154 [2024-10-15 12:53:58.358791] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:38.154 [2024-10-15 12:53:58.358797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:38.154 [2024-10-15 12:53:58.358802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:38.154 [2024-10-15 12:53:58.358812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 
00:13:38.154 [2024-10-15 12:53:58.358821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:38.154 [2024-10-15 12:53:58.358827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:38.154 ===================================================== 00:13:38.154 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:38.154 ===================================================== 00:13:38.154 Controller Capabilities/Features 00:13:38.154 ================================ 00:13:38.154 Vendor ID: 4e58 00:13:38.154 Subsystem Vendor ID: 4e58 00:13:38.154 Serial Number: SPDK1 00:13:38.154 Model Number: SPDK bdev Controller 00:13:38.154 Firmware Version: 25.01 00:13:38.154 Recommended Arb Burst: 6 00:13:38.154 IEEE OUI Identifier: 8d 6b 50 00:13:38.154 Multi-path I/O 00:13:38.154 May have multiple subsystem ports: Yes 00:13:38.154 May have multiple controllers: Yes 00:13:38.154 Associated with SR-IOV VF: No 00:13:38.154 Max Data Transfer Size: 131072 00:13:38.154 Max Number of Namespaces: 32 00:13:38.154 Max Number of I/O Queues: 127 00:13:38.154 NVMe Specification Version (VS): 1.3 00:13:38.154 NVMe Specification Version (Identify): 1.3 00:13:38.154 Maximum Queue Entries: 256 00:13:38.154 Contiguous Queues Required: Yes 00:13:38.154 Arbitration Mechanisms Supported 00:13:38.154 Weighted Round Robin: Not Supported 00:13:38.154 Vendor Specific: Not Supported 00:13:38.154 Reset Timeout: 15000 ms 00:13:38.154 Doorbell Stride: 4 bytes 00:13:38.154 NVM Subsystem Reset: Not Supported 00:13:38.154 Command Sets Supported 00:13:38.154 NVM Command Set: Supported 00:13:38.154 Boot Partition: Not Supported 00:13:38.154 Memory Page Size Minimum: 4096 bytes 00:13:38.154 Memory Page Size Maximum: 4096 bytes 00:13:38.154 Persistent Memory Region: Not Supported 00:13:38.154 Optional Asynchronous Events 
Supported 00:13:38.154 Namespace Attribute Notices: Supported 00:13:38.154 Firmware Activation Notices: Not Supported 00:13:38.154 ANA Change Notices: Not Supported 00:13:38.154 PLE Aggregate Log Change Notices: Not Supported 00:13:38.154 LBA Status Info Alert Notices: Not Supported 00:13:38.154 EGE Aggregate Log Change Notices: Not Supported 00:13:38.154 Normal NVM Subsystem Shutdown event: Not Supported 00:13:38.154 Zone Descriptor Change Notices: Not Supported 00:13:38.154 Discovery Log Change Notices: Not Supported 00:13:38.154 Controller Attributes 00:13:38.154 128-bit Host Identifier: Supported 00:13:38.154 Non-Operational Permissive Mode: Not Supported 00:13:38.154 NVM Sets: Not Supported 00:13:38.154 Read Recovery Levels: Not Supported 00:13:38.154 Endurance Groups: Not Supported 00:13:38.154 Predictable Latency Mode: Not Supported 00:13:38.154 Traffic Based Keep ALive: Not Supported 00:13:38.154 Namespace Granularity: Not Supported 00:13:38.154 SQ Associations: Not Supported 00:13:38.154 UUID List: Not Supported 00:13:38.154 Multi-Domain Subsystem: Not Supported 00:13:38.154 Fixed Capacity Management: Not Supported 00:13:38.154 Variable Capacity Management: Not Supported 00:13:38.154 Delete Endurance Group: Not Supported 00:13:38.154 Delete NVM Set: Not Supported 00:13:38.154 Extended LBA Formats Supported: Not Supported 00:13:38.154 Flexible Data Placement Supported: Not Supported 00:13:38.154 00:13:38.154 Controller Memory Buffer Support 00:13:38.154 ================================ 00:13:38.154 Supported: No 00:13:38.154 00:13:38.154 Persistent Memory Region Support 00:13:38.154 ================================ 00:13:38.154 Supported: No 00:13:38.154 00:13:38.154 Admin Command Set Attributes 00:13:38.154 ============================ 00:13:38.154 Security Send/Receive: Not Supported 00:13:38.154 Format NVM: Not Supported 00:13:38.154 Firmware Activate/Download: Not Supported 00:13:38.154 Namespace Management: Not Supported 00:13:38.154 Device Self-Test: 
Not Supported 00:13:38.154 Directives: Not Supported 00:13:38.154 NVMe-MI: Not Supported 00:13:38.154 Virtualization Management: Not Supported 00:13:38.154 Doorbell Buffer Config: Not Supported 00:13:38.154 Get LBA Status Capability: Not Supported 00:13:38.154 Command & Feature Lockdown Capability: Not Supported 00:13:38.154 Abort Command Limit: 4 00:13:38.154 Async Event Request Limit: 4 00:13:38.154 Number of Firmware Slots: N/A 00:13:38.154 Firmware Slot 1 Read-Only: N/A 00:13:38.154 Firmware Activation Without Reset: N/A 00:13:38.154 Multiple Update Detection Support: N/A 00:13:38.154 Firmware Update Granularity: No Information Provided 00:13:38.154 Per-Namespace SMART Log: No 00:13:38.154 Asymmetric Namespace Access Log Page: Not Supported 00:13:38.154 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:38.154 Command Effects Log Page: Supported 00:13:38.154 Get Log Page Extended Data: Supported 00:13:38.154 Telemetry Log Pages: Not Supported 00:13:38.154 Persistent Event Log Pages: Not Supported 00:13:38.154 Supported Log Pages Log Page: May Support 00:13:38.154 Commands Supported & Effects Log Page: Not Supported 00:13:38.154 Feature Identifiers & Effects Log Page:May Support 00:13:38.154 NVMe-MI Commands & Effects Log Page: May Support 00:13:38.154 Data Area 4 for Telemetry Log: Not Supported 00:13:38.154 Error Log Page Entries Supported: 128 00:13:38.154 Keep Alive: Supported 00:13:38.154 Keep Alive Granularity: 10000 ms 00:13:38.154 00:13:38.154 NVM Command Set Attributes 00:13:38.154 ========================== 00:13:38.154 Submission Queue Entry Size 00:13:38.154 Max: 64 00:13:38.154 Min: 64 00:13:38.154 Completion Queue Entry Size 00:13:38.154 Max: 16 00:13:38.154 Min: 16 00:13:38.155 Number of Namespaces: 32 00:13:38.155 Compare Command: Supported 00:13:38.155 Write Uncorrectable Command: Not Supported 00:13:38.155 Dataset Management Command: Supported 00:13:38.155 Write Zeroes Command: Supported 00:13:38.155 Set Features Save Field: Not Supported 
00:13:38.155 Reservations: Not Supported 00:13:38.155 Timestamp: Not Supported 00:13:38.155 Copy: Supported 00:13:38.155 Volatile Write Cache: Present 00:13:38.155 Atomic Write Unit (Normal): 1 00:13:38.155 Atomic Write Unit (PFail): 1 00:13:38.155 Atomic Compare & Write Unit: 1 00:13:38.155 Fused Compare & Write: Supported 00:13:38.155 Scatter-Gather List 00:13:38.155 SGL Command Set: Supported (Dword aligned) 00:13:38.155 SGL Keyed: Not Supported 00:13:38.155 SGL Bit Bucket Descriptor: Not Supported 00:13:38.155 SGL Metadata Pointer: Not Supported 00:13:38.155 Oversized SGL: Not Supported 00:13:38.155 SGL Metadata Address: Not Supported 00:13:38.155 SGL Offset: Not Supported 00:13:38.155 Transport SGL Data Block: Not Supported 00:13:38.155 Replay Protected Memory Block: Not Supported 00:13:38.155 00:13:38.155 Firmware Slot Information 00:13:38.155 ========================= 00:13:38.155 Active slot: 1 00:13:38.155 Slot 1 Firmware Revision: 25.01 00:13:38.155 00:13:38.155 00:13:38.155 Commands Supported and Effects 00:13:38.155 ============================== 00:13:38.155 Admin Commands 00:13:38.155 -------------- 00:13:38.155 Get Log Page (02h): Supported 00:13:38.155 Identify (06h): Supported 00:13:38.155 Abort (08h): Supported 00:13:38.155 Set Features (09h): Supported 00:13:38.155 Get Features (0Ah): Supported 00:13:38.155 Asynchronous Event Request (0Ch): Supported 00:13:38.155 Keep Alive (18h): Supported 00:13:38.155 I/O Commands 00:13:38.155 ------------ 00:13:38.155 Flush (00h): Supported LBA-Change 00:13:38.155 Write (01h): Supported LBA-Change 00:13:38.155 Read (02h): Supported 00:13:38.155 Compare (05h): Supported 00:13:38.155 Write Zeroes (08h): Supported LBA-Change 00:13:38.155 Dataset Management (09h): Supported LBA-Change 00:13:38.155 Copy (19h): Supported LBA-Change 00:13:38.155 00:13:38.155 Error Log 00:13:38.155 ========= 00:13:38.155 00:13:38.155 Arbitration 00:13:38.155 =========== 00:13:38.155 Arbitration Burst: 1 00:13:38.155 00:13:38.155 Power 
Management 00:13:38.155 ================ 00:13:38.155 Number of Power States: 1 00:13:38.155 Current Power State: Power State #0 00:13:38.155 Power State #0: 00:13:38.155 Max Power: 0.00 W 00:13:38.155 Non-Operational State: Operational 00:13:38.155 Entry Latency: Not Reported 00:13:38.155 Exit Latency: Not Reported 00:13:38.155 Relative Read Throughput: 0 00:13:38.155 Relative Read Latency: 0 00:13:38.155 Relative Write Throughput: 0 00:13:38.155 Relative Write Latency: 0 00:13:38.155 Idle Power: Not Reported 00:13:38.155 Active Power: Not Reported 00:13:38.155 Non-Operational Permissive Mode: Not Supported 00:13:38.155 00:13:38.155 Health Information 00:13:38.155 ================== 00:13:38.155 Critical Warnings: 00:13:38.155 Available Spare Space: OK 00:13:38.155 Temperature: OK 00:13:38.155 Device Reliability: OK 00:13:38.155 Read Only: No 00:13:38.155 Volatile Memory Backup: OK 00:13:38.155 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:38.155 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:38.155 Available Spare: 0% 00:13:38.155 Available Sp[2024-10-15 12:53:58.358908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:38.155 [2024-10-15 12:53:58.358917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:38.155 [2024-10-15 12:53:58.358940] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:38.155 [2024-10-15 12:53:58.358949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.155 [2024-10-15 12:53:58.358955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.155 [2024-10-15 12:53:58.358960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.155 [2024-10-15 12:53:58.358965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.155 [2024-10-15 12:53:58.361608] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:38.155 [2024-10-15 12:53:58.361618] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:38.155 [2024-10-15 12:53:58.362135] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:38.155 [2024-10-15 12:53:58.362182] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:38.155 [2024-10-15 12:53:58.362188] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:38.155 [2024-10-15 12:53:58.363143] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:38.155 [2024-10-15 12:53:58.363153] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:38.155 [2024-10-15 12:53:58.363207] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:38.155 [2024-10-15 12:53:58.364174] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:38.155 are Threshold: 0% 00:13:38.155 Life Percentage Used: 0% 00:13:38.155 Data Units Read: 0 00:13:38.155 Data Units Written: 0 00:13:38.155 Host Read Commands: 0 00:13:38.155 Host Write Commands: 0 00:13:38.155 Controller Busy Time: 0 minutes 
00:13:38.155 Power Cycles: 0 00:13:38.155 Power On Hours: 0 hours 00:13:38.155 Unsafe Shutdowns: 0 00:13:38.155 Unrecoverable Media Errors: 0 00:13:38.155 Lifetime Error Log Entries: 0 00:13:38.155 Warning Temperature Time: 0 minutes 00:13:38.155 Critical Temperature Time: 0 minutes 00:13:38.155 00:13:38.155 Number of Queues 00:13:38.155 ================ 00:13:38.155 Number of I/O Submission Queues: 127 00:13:38.155 Number of I/O Completion Queues: 127 00:13:38.155 00:13:38.155 Active Namespaces 00:13:38.155 ================= 00:13:38.155 Namespace ID:1 00:13:38.155 Error Recovery Timeout: Unlimited 00:13:38.155 Command Set Identifier: NVM (00h) 00:13:38.155 Deallocate: Supported 00:13:38.155 Deallocated/Unwritten Error: Not Supported 00:13:38.155 Deallocated Read Value: Unknown 00:13:38.155 Deallocate in Write Zeroes: Not Supported 00:13:38.155 Deallocated Guard Field: 0xFFFF 00:13:38.155 Flush: Supported 00:13:38.155 Reservation: Supported 00:13:38.155 Namespace Sharing Capabilities: Multiple Controllers 00:13:38.155 Size (in LBAs): 131072 (0GiB) 00:13:38.155 Capacity (in LBAs): 131072 (0GiB) 00:13:38.155 Utilization (in LBAs): 131072 (0GiB) 00:13:38.155 NGUID: D067CDB417104F27BC02BF1C02FD8779 00:13:38.155 UUID: d067cdb4-1710-4f27-bc02-bf1c02fd8779 00:13:38.155 Thin Provisioning: Not Supported 00:13:38.155 Per-NS Atomic Units: Yes 00:13:38.155 Atomic Boundary Size (Normal): 0 00:13:38.155 Atomic Boundary Size (PFail): 0 00:13:38.155 Atomic Boundary Offset: 0 00:13:38.155 Maximum Single Source Range Length: 65535 00:13:38.155 Maximum Copy Length: 65535 00:13:38.155 Maximum Source Range Count: 1 00:13:38.155 NGUID/EUI64 Never Reused: No 00:13:38.155 Namespace Write Protected: No 00:13:38.155 Number of LBA Formats: 1 00:13:38.155 Current LBA Format: LBA Format #00 00:13:38.155 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:38.155 00:13:38.155 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:38.415 [2024-10-15 12:53:58.582371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:43.712 Initializing NVMe Controllers 00:13:43.712 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:43.712 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:43.712 Initialization complete. Launching workers. 00:13:43.712 ======================================================== 00:13:43.712 Latency(us) 00:13:43.712 Device Information : IOPS MiB/s Average min max 00:13:43.712 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39946.15 156.04 3204.13 951.02 7627.64 00:13:43.712 ======================================================== 00:13:43.712 Total : 39946.15 156.04 3204.13 951.02 7627.64 00:13:43.712 00:13:43.712 [2024-10-15 12:54:03.599604] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:43.712 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:43.712 [2024-10-15 12:54:03.824631] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.986 Initializing NVMe Controllers 00:13:48.986 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:48.986 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:48.986 
Initialization complete. Launching workers. 00:13:48.986 ======================================================== 00:13:48.986 Latency(us) 00:13:48.986 Device Information : IOPS MiB/s Average min max 00:13:48.986 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.32 62.72 7976.01 5986.47 8981.36 00:13:48.986 ======================================================== 00:13:48.986 Total : 16057.32 62.72 7976.01 5986.47 8981.36 00:13:48.986 00:13:48.986 [2024-10-15 12:54:08.865988] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.986 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:48.986 [2024-10-15 12:54:09.061959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:54.254 [2024-10-15 12:54:14.150014] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:54.254 Initializing NVMe Controllers 00:13:54.254 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:54.254 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:54.254 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:54.254 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:54.254 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:54.254 Initialization complete. Launching workers. 
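As a quick consistency check on the two spdk_nvme_perf summaries above: both runs use a 4096-byte I/O size (`-o 4096`), so the MiB/s column should equal IOPS × io_size / 2^20. A minimal sketch, using only the figures copied from the read and write tables in this log:

```python
# Verify the IOPS <-> MiB/s relation in the spdk_nvme_perf tables above.
# With -o 4096, throughput in MiB/s is iops * 4096 / 2**20 (= iops / 256).
IO_SIZE = 4096  # bytes, from the -o 4096 flag in the log

def mib_per_s(iops: float, io_size: int = IO_SIZE) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size / 2**20

# read run  (-w read):  39946.15 IOPS reported alongside 156.04 MiB/s
assert round(mib_per_s(39946.15), 2) == 156.04
# write run (-w write): 16057.32 IOPS reported alongside 62.72 MiB/s
assert round(mib_per_s(16057.32), 2) == 62.72
```

Both columns agree, so the tables are internally consistent.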
00:13:54.254 Starting thread on core 2 00:13:54.254 Starting thread on core 3 00:13:54.254 Starting thread on core 1 00:13:54.254 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:54.254 [2024-10-15 12:54:14.431973] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:57.544 [2024-10-15 12:54:17.486114] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:57.544 Initializing NVMe Controllers 00:13:57.544 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:57.544 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:57.544 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:57.544 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:57.544 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:57.544 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:57.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:57.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:57.544 Initialization complete. Launching workers. 
00:13:57.544 Starting thread on core 1 with urgent priority queue 00:13:57.544 Starting thread on core 2 with urgent priority queue 00:13:57.544 Starting thread on core 3 with urgent priority queue 00:13:57.544 Starting thread on core 0 with urgent priority queue 00:13:57.544 SPDK bdev Controller (SPDK1 ) core 0: 8712.00 IO/s 11.48 secs/100000 ios 00:13:57.544 SPDK bdev Controller (SPDK1 ) core 1: 8934.67 IO/s 11.19 secs/100000 ios 00:13:57.544 SPDK bdev Controller (SPDK1 ) core 2: 9567.33 IO/s 10.45 secs/100000 ios 00:13:57.544 SPDK bdev Controller (SPDK1 ) core 3: 8035.67 IO/s 12.44 secs/100000 ios 00:13:57.544 ======================================================== 00:13:57.544 00:13:57.544 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:57.544 [2024-10-15 12:54:17.757571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:57.544 Initializing NVMe Controllers 00:13:57.544 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:57.544 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:57.544 Namespace ID: 1 size: 0GB 00:13:57.544 Initialization complete. 00:13:57.544 INFO: using host memory buffer for IO 00:13:57.544 Hello world! 
00:13:57.544 [2024-10-15 12:54:17.789797] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:57.544 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:57.802 [2024-10-15 12:54:18.056061] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:58.789 Initializing NVMe Controllers 00:13:58.789 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:58.789 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:58.789 Initialization complete. Launching workers. 00:13:58.789 submit (in ns) avg, min, max = 8924.2, 3174.3, 4001718.1 00:13:58.789 complete (in ns) avg, min, max = 19261.9, 1747.6, 4993899.0 00:13:58.789 00:13:58.789 Submit histogram 00:13:58.789 ================ 00:13:58.789 Range in us Cumulative Count 00:13:58.789 3.170 - 3.185: 0.0418% ( 7) 00:13:58.789 3.185 - 3.200: 0.1255% ( 14) 00:13:58.789 3.200 - 3.215: 0.3705% ( 41) 00:13:58.789 3.215 - 3.230: 0.7530% ( 64) 00:13:58.789 3.230 - 3.246: 1.3267% ( 96) 00:13:58.789 3.246 - 3.261: 2.2829% ( 160) 00:13:58.789 3.261 - 3.276: 4.4822% ( 368) 00:13:58.789 3.276 - 3.291: 8.8328% ( 728) 00:13:58.789 3.291 - 3.307: 14.4744% ( 944) 00:13:58.789 3.307 - 3.322: 21.8192% ( 1229) 00:13:58.789 3.322 - 3.337: 28.4468% ( 1109) 00:13:58.789 3.337 - 3.352: 34.3274% ( 984) 00:13:58.789 3.352 - 3.368: 40.2259% ( 987) 00:13:58.789 3.368 - 3.383: 46.1184% ( 986) 00:13:58.789 3.383 - 3.398: 52.3755% ( 1047) 00:13:58.789 3.398 - 3.413: 57.7900% ( 906) 00:13:58.789 3.413 - 3.429: 63.5152% ( 958) 00:13:58.789 3.429 - 3.444: 69.7902% ( 1050) 00:13:58.789 3.444 - 3.459: 74.3501% ( 763) 00:13:58.789 3.459 - 3.474: 79.1072% ( 796) 00:13:58.789 3.474 - 3.490: 82.3104% ( 536) 
00:13:58.789 3.490 - 3.505: 84.9340% ( 439) 00:13:58.789 3.505 - 3.520: 86.3922% ( 244) 00:13:58.789 3.520 - 3.535: 87.1272% ( 123) 00:13:58.789 3.535 - 3.550: 87.6173% ( 82) 00:13:58.789 3.550 - 3.566: 88.0715% ( 76) 00:13:58.789 3.566 - 3.581: 88.7647% ( 116) 00:13:58.789 3.581 - 3.596: 89.5237% ( 127) 00:13:58.790 3.596 - 3.611: 90.3663% ( 141) 00:13:58.790 3.611 - 3.627: 91.5616% ( 200) 00:13:58.790 3.627 - 3.642: 92.4640% ( 151) 00:13:58.790 3.642 - 3.657: 93.3126% ( 142) 00:13:58.790 3.657 - 3.672: 94.3764% ( 178) 00:13:58.790 3.672 - 3.688: 95.2967% ( 154) 00:13:58.790 3.688 - 3.703: 96.2230% ( 155) 00:13:58.790 3.703 - 3.718: 97.1075% ( 148) 00:13:58.790 3.718 - 3.733: 97.7888% ( 114) 00:13:58.790 3.733 - 3.749: 98.2669% ( 80) 00:13:58.790 3.749 - 3.764: 98.6135% ( 58) 00:13:58.790 3.764 - 3.779: 98.8705% ( 43) 00:13:58.790 3.779 - 3.794: 99.0856% ( 36) 00:13:58.790 3.794 - 3.810: 99.2410% ( 26) 00:13:58.790 3.810 - 3.825: 99.3725% ( 22) 00:13:58.790 3.825 - 3.840: 99.4263% ( 9) 00:13:58.790 3.840 - 3.855: 99.4681% ( 7) 00:13:58.790 3.855 - 3.870: 99.4860% ( 3) 00:13:58.790 3.870 - 3.886: 99.4980% ( 2) 00:13:58.790 3.886 - 3.901: 99.5040% ( 1) 00:13:58.790 3.901 - 3.931: 99.5100% ( 1) 00:13:58.790 3.931 - 3.962: 99.5159% ( 1) 00:13:58.790 3.992 - 4.023: 99.5219% ( 1) 00:13:58.790 4.053 - 4.084: 99.5279% ( 1) 00:13:58.790 4.937 - 4.968: 99.5339% ( 1) 00:13:58.790 4.968 - 4.998: 99.5398% ( 1) 00:13:58.790 5.790 - 5.821: 99.5458% ( 1) 00:13:58.790 5.912 - 5.943: 99.5518% ( 1) 00:13:58.790 5.943 - 5.973: 99.5697% ( 3) 00:13:58.790 5.973 - 6.004: 99.5757% ( 1) 00:13:58.790 6.004 - 6.034: 99.5817% ( 1) 00:13:58.790 6.034 - 6.065: 99.5876% ( 1) 00:13:58.790 6.126 - 6.156: 99.5936% ( 1) 00:13:58.790 6.217 - 6.248: 99.6175% ( 4) 00:13:58.790 6.248 - 6.278: 99.6235% ( 1) 00:13:58.790 6.400 - 6.430: 99.6295% ( 1) 00:13:58.790 6.461 - 6.491: 99.6355% ( 1) 00:13:58.790 6.522 - 6.552: 99.6414% ( 1) 00:13:58.790 6.552 - 6.583: 99.6474% ( 1) 00:13:58.790 6.613 - 6.644: 
99.6534% ( 1) 00:13:58.790 6.766 - 6.796: 99.6594% ( 1) 00:13:58.790 6.796 - 6.827: 99.6653% ( 1) 00:13:58.790 6.827 - 6.857: 99.6713% ( 1) 00:13:58.790 6.888 - 6.918: 99.6773% ( 1) 00:13:58.790 7.162 - 7.192: 99.6833% ( 1) 00:13:58.790 7.284 - 7.314: 99.6892% ( 1) 00:13:58.790 7.314 - 7.345: 99.6952% ( 1) 00:13:58.790 7.436 - 7.467: 99.7012% ( 1) 00:13:58.790 7.710 - 7.741: 99.7072% ( 1) 00:13:58.790 7.802 - 7.863: 99.7131% ( 1) 00:13:58.790 7.985 - 8.046: 99.7191% ( 1) 00:13:58.790 8.290 - 8.350: 99.7251% ( 1) 00:13:58.790 8.533 - 8.594: 99.7370% ( 2) 00:13:58.790 8.655 - 8.716: 99.7430% ( 1) 00:13:58.790 8.960 - 9.021: 99.7490% ( 1) 00:13:58.790 9.143 - 9.204: 99.7550% ( 1) 00:13:58.790 9.509 - 9.570: 99.7669% ( 2) 00:13:58.790 9.691 - 9.752: 99.7729% ( 1) 00:13:58.790 10.423 - 10.484: 99.7789% ( 1) 00:13:58.790 10.545 - 10.606: 99.7908% ( 2) 00:13:58.790 10.971 - 11.032: 99.7968% ( 1) 00:13:58.790 13.531 - 13.592: 99.8028% ( 1) 00:13:58.790 13.714 - 13.775: 99.8088% ( 1) 00:13:58.790 13.958 - 14.019: 99.8147% ( 1) 00:13:58.790 16.091 - 16.213: 99.8207% ( 1) 00:13:58.790 18.651 - 18.773: 99.8267% ( 1) 00:13:58.790 18.773 - 18.895: 99.8386% ( 2) 00:13:58.790 19.017 - 19.139: 99.8506% ( 2) 00:13:58.790 19.383 - 19.505: 99.8566% ( 1) 00:13:58.790 19.627 - 19.749: 99.8625% ( 1) 00:13:58.790 3994.575 - 4025.783: 100.0000% ( 23) 00:13:58.790 00:13:58.790 Complete histogram 00:13:58.790 ================== 00:13:58.790 Range in us Cumulative Count 00:13:58.790 1.745 - 1.752: 0.1315% ( 22) 00:13:58.790 1.752 - 1.760: 0.7829% ( 109) 00:13:58.790 1.760 - 1.768: 2.4263% ( 275) 00:13:58.790 1.768 - 1.775: 3.8188% ( 233) 00:13:58.790 1.775 - 1.783: 4.4045% ( 98) 00:13:58.790 1.783 - 1.790: 4.6375% ( 39) 00:13:58.790 1.790 - 1.798: 5.2770% ( 107) 00:13:58.790 1.798 - 1.806: 10.0639% ( 801) 00:13:58.790 1.806 - 1.813: 29.8751% ( 3315) 00:13:58.790 1.813 - 1.821: 60.9634% ( 5202) 00:13:58.790 1.821 - 1.829: 81.8622% ( 3497) 00:13:58.790 1.829 - 1.836: 90.4679% ( 1440) 
00:13:58.790 1.836 - 1.844: 94.1254% ( 612) 00:13:58.790 1.844 - 1.851: 95.9900% ( 312) 00:13:58.790 1.851 - 1.859: 96.9222% ( 156) 00:13:58.790 1.859 - 1.867: 97.1912% ( 45) 00:13:58.790 1.867 - 1.874: 97.3884% ( 33) 00:13:58.790 1.874 - 1.882: 97.7290% ( 57) 00:13:58.790 1.882 - 1.890: 98.0697% ( 57) 00:13:58.790 1.890 - 1.897: 98.4820% ( 69) 00:13:58.790 1.897 - 1.905: 98.7032% ( 37) 00:13:58.790 1.905 - 1.912: 98.7868% ( 14) 00:13:58.790 1.912 - 1.920: 98.8406% ( 9) 00:13:58.790 1.920 - 1.928: 98.8884% ( 8) 00:13:58.790 1.928 - 1.935: 98.9243% ( 6) 00:13:58.790 1.935 - 1.943: 99.0259% ( 17) 00:13:58.790 1.943 - 1.950: 99.1095% ( 14) 00:13:58.790 1.950 - 1.966: 99.1394% ( 5) 00:13:58.790 1.966 - 1.981: 99.1454% ( 1) 00:13:58.790 1.981 - 1.996: 99.1753% ( 5) 00:13:58.790 1.996 - 2.011: 99.1932% ( 3) 00:13:58.790 2.011 - 2.027: 99.2111% ( 3) 00:13:58.790 2.027 - 2.042: 99.2171% ( 1) 00:13:58.790 2.042 - 2.057: 99.2231% ( 1) 00:13:58.790 2.088 - 2.103: 99.2291% ( 1) 00:13:58.790 2.103 - 2.118: 99.2350% ( 1) 00:13:58.790 2.118 - 2.133: 99.2410% ( 1) 00:13:58.790 2.133 - 2.149: 99.2470% ( 1) 00:13:58.790 2.149 - 2.164: 99.2530% ( 1) 00:13:58.790 2.179 - 2.194: 99.2589% ( 1) 00:13:58.790 2.194 - 2.210: 99.2649% ( 1) 00:13:58.790 2.210 - 2.225: 99.2709% ( 1) 00:13:58.790 2.240 - 2.255: 99.2769% ( 1) 00:13:58.790 2.301 - 2.316: 99.2888% ( 2) 00:13:58.790 2.331 - 2.347: 99.2948% ( 1) 00:13:58.790 2.377 - 2.392: 99.3008% ( 1) 00:13:58.790 2.423 - 2.438: 99.3068% ( 1) 00:13:58.790 2.453 - 2.469: 99.3127% ( 1) 00:13:58.790 4.053 - 4.084: 99.3247% ( 2) 00:13:58.790 4.114 - 4.145: 99.3307% ( 1) 00:13:58.790 4.236 - 4.267: 99.3366% ( 1) 00:13:58.790 4.450 - 4.480: 99.3426% ( 1) 00:13:58.790 4.541 - 4.571: 99.3486% ( 1) 00:13:58.790 4.785 - 4.815: 99.3546% ( 1) 00:13:58.790 4.846 - 4.876: 99.3665% ( 2) 00:13:58.790 4.937 - 4.968: 99.3725% ( 1) 00:13:58.790 5.059 - 5.090: 99.3785% ( 1) 00:13:58.790 5.150 - 5.181: 99.3844% ( 1) 00:13:58.790 5.211 - 5.242: 99.3904% ( 1) 
00:13:58.790 5.516 - 5.547: 99.3964% ( 1) 00:13:58.790 5.608 - 5.638: 99.4024% ( 1) 00:13:58.790 5.638 - 5.669: 99.4084% ( 1) 00:13:58.790 5.669 - 5.699: 99.4143% ( 1) 00:13:58.790 5.790 - 5.821: 99.4203% ( 1) 00:13:58.790 5.821 - 5.851: 99.4263% ( 1) 00:13:58.790 [2024-10-15 12:54:19.076132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:59.066 5.851 - 5.882: 99.4323% ( 1) 00:13:59.066 5.912 - 5.943: 99.4382% ( 1) 00:13:59.066 6.004 - 6.034: 99.4442% ( 1) 00:13:59.066 6.034 - 6.065: 99.4502% ( 1) 00:13:59.066 6.156 - 6.187: 99.4562% ( 1) 00:13:59.066 6.309 - 6.339: 99.4741% ( 3) 00:13:59.066 6.370 - 6.400: 99.4801% ( 1) 00:13:59.066 6.613 - 6.644: 99.4860% ( 1) 00:13:59.066 6.674 - 6.705: 99.4920% ( 1) 00:13:59.066 6.705 - 6.735: 99.4980% ( 1) 00:13:59.066 6.766 - 6.796: 99.5040% ( 1) 00:13:59.066 6.918 - 6.949: 99.5100% ( 1) 00:13:59.066 7.131 - 7.162: 99.5159% ( 1) 00:13:59.066 7.345 - 7.375: 99.5219% ( 1) 00:13:59.066 7.375 - 7.406: 99.5279% ( 1) 00:13:59.066 7.436 - 7.467: 99.5339% ( 1) 00:13:59.066 7.863 - 7.924: 99.5398% ( 1) 00:13:59.066 7.985 - 8.046: 99.5458% ( 1) 00:13:59.066 11.642 - 11.703: 99.5518% ( 1) 00:13:59.066 12.008 - 12.069: 99.5578% ( 1) 00:13:59.066 17.554 - 17.676: 99.5637% ( 1) 00:13:59.066 2995.931 - 3011.535: 99.5697% ( 1) 00:13:59.066 3994.575 - 4025.783: 99.9940% ( 71) 00:13:59.066 4993.219 - 5024.427: 100.0000% ( 1) 00:13:59.066 00:13:59.066 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:59.066 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:59.066 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:59.066 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user --
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:59.066 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:59.066 [ 00:13:59.066 { 00:13:59.066 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:59.066 "subtype": "Discovery", 00:13:59.066 "listen_addresses": [], 00:13:59.066 "allow_any_host": true, 00:13:59.066 "hosts": [] 00:13:59.066 }, 00:13:59.066 { 00:13:59.066 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:59.066 "subtype": "NVMe", 00:13:59.066 "listen_addresses": [ 00:13:59.066 { 00:13:59.066 "trtype": "VFIOUSER", 00:13:59.066 "adrfam": "IPv4", 00:13:59.066 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:59.066 "trsvcid": "0" 00:13:59.067 } 00:13:59.067 ], 00:13:59.067 "allow_any_host": true, 00:13:59.067 "hosts": [], 00:13:59.067 "serial_number": "SPDK1", 00:13:59.067 "model_number": "SPDK bdev Controller", 00:13:59.067 "max_namespaces": 32, 00:13:59.067 "min_cntlid": 1, 00:13:59.067 "max_cntlid": 65519, 00:13:59.067 "namespaces": [ 00:13:59.067 { 00:13:59.067 "nsid": 1, 00:13:59.067 "bdev_name": "Malloc1", 00:13:59.067 "name": "Malloc1", 00:13:59.067 "nguid": "D067CDB417104F27BC02BF1C02FD8779", 00:13:59.067 "uuid": "d067cdb4-1710-4f27-bc02-bf1c02fd8779" 00:13:59.067 } 00:13:59.067 ] 00:13:59.067 }, 00:13:59.067 { 00:13:59.067 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:59.067 "subtype": "NVMe", 00:13:59.067 "listen_addresses": [ 00:13:59.067 { 00:13:59.067 "trtype": "VFIOUSER", 00:13:59.067 "adrfam": "IPv4", 00:13:59.067 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:59.067 "trsvcid": "0" 00:13:59.067 } 00:13:59.067 ], 00:13:59.067 "allow_any_host": true, 00:13:59.067 "hosts": [], 00:13:59.067 "serial_number": "SPDK2", 00:13:59.067 "model_number": "SPDK bdev Controller", 00:13:59.067 "max_namespaces": 32, 00:13:59.067 "min_cntlid": 1, 00:13:59.067 "max_cntlid": 65519, 00:13:59.067 "namespaces": [ 
00:13:59.067 { 00:13:59.067 "nsid": 1, 00:13:59.067 "bdev_name": "Malloc2", 00:13:59.067 "name": "Malloc2", 00:13:59.067 "nguid": "AF5E05740C69498A8DBF47D32FEF25C7", 00:13:59.067 "uuid": "af5e0574-0c69-498a-8dbf-47d32fef25c7" 00:13:59.067 } 00:13:59.067 ] 00:13:59.067 } 00:13:59.067 ] 00:13:59.067 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:59.067 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1183554 00:13:59.067 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:59.067 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:59.067 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:59.067 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:59.067 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:59.067 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:59.067 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:59.067 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:59.326 [2024-10-15 12:54:19.459047] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:59.326 Malloc3 00:13:59.326 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:59.585 [2024-10-15 12:54:19.714931] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:59.585 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:59.585 Asynchronous Event Request test 00:13:59.585 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:59.585 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:59.585 Registering asynchronous event callbacks... 00:13:59.585 Starting namespace attribute notice tests for all controllers... 00:13:59.585 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:59.585 aer_cb - Changed Namespace 00:13:59.585 Cleaning up... 
00:13:59.846 [ 00:13:59.846 { 00:13:59.846 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:59.846 "subtype": "Discovery", 00:13:59.846 "listen_addresses": [], 00:13:59.846 "allow_any_host": true, 00:13:59.846 "hosts": [] 00:13:59.846 }, 00:13:59.846 { 00:13:59.846 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:59.846 "subtype": "NVMe", 00:13:59.846 "listen_addresses": [ 00:13:59.846 { 00:13:59.846 "trtype": "VFIOUSER", 00:13:59.846 "adrfam": "IPv4", 00:13:59.846 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:59.846 "trsvcid": "0" 00:13:59.846 } 00:13:59.846 ], 00:13:59.846 "allow_any_host": true, 00:13:59.846 "hosts": [], 00:13:59.846 "serial_number": "SPDK1", 00:13:59.846 "model_number": "SPDK bdev Controller", 00:13:59.846 "max_namespaces": 32, 00:13:59.846 "min_cntlid": 1, 00:13:59.846 "max_cntlid": 65519, 00:13:59.846 "namespaces": [ 00:13:59.846 { 00:13:59.846 "nsid": 1, 00:13:59.846 "bdev_name": "Malloc1", 00:13:59.846 "name": "Malloc1", 00:13:59.846 "nguid": "D067CDB417104F27BC02BF1C02FD8779", 00:13:59.846 "uuid": "d067cdb4-1710-4f27-bc02-bf1c02fd8779" 00:13:59.846 }, 00:13:59.846 { 00:13:59.846 "nsid": 2, 00:13:59.846 "bdev_name": "Malloc3", 00:13:59.846 "name": "Malloc3", 00:13:59.846 "nguid": "D0567A4BA9BF4C4094B35B76B7F6871E", 00:13:59.846 "uuid": "d0567a4b-a9bf-4c40-94b3-5b76b7f6871e" 00:13:59.846 } 00:13:59.846 ] 00:13:59.846 }, 00:13:59.846 { 00:13:59.846 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:59.846 "subtype": "NVMe", 00:13:59.846 "listen_addresses": [ 00:13:59.846 { 00:13:59.846 "trtype": "VFIOUSER", 00:13:59.846 "adrfam": "IPv4", 00:13:59.846 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:59.846 "trsvcid": "0" 00:13:59.846 } 00:13:59.846 ], 00:13:59.846 "allow_any_host": true, 00:13:59.846 "hosts": [], 00:13:59.846 "serial_number": "SPDK2", 00:13:59.846 "model_number": "SPDK bdev Controller", 00:13:59.846 "max_namespaces": 32, 00:13:59.846 "min_cntlid": 1, 00:13:59.846 "max_cntlid": 65519, 00:13:59.846 "namespaces": [ 
00:13:59.846 { 00:13:59.846 "nsid": 1, 00:13:59.846 "bdev_name": "Malloc2", 00:13:59.846 "name": "Malloc2", 00:13:59.846 "nguid": "AF5E05740C69498A8DBF47D32FEF25C7", 00:13:59.846 "uuid": "af5e0574-0c69-498a-8dbf-47d32fef25c7" 00:13:59.846 } 00:13:59.846 ] 00:13:59.846 } 00:13:59.846 ] 00:13:59.846 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1183554 00:13:59.846 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:59.846 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:59.846 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:59.846 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:59.846 [2024-10-15 12:54:19.961064] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:13:59.846 [2024-10-15 12:54:19.961112] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183618 ] 00:13:59.846 [2024-10-15 12:54:19.988788] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:59.846 [2024-10-15 12:54:19.998835] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:59.846 [2024-10-15 12:54:19.998857] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fef01985000 00:13:59.846 [2024-10-15 12:54:19.999834] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:59.846 [2024-10-15 12:54:20.000838] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:59.846 [2024-10-15 12:54:20.001853] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:59.846 [2024-10-15 12:54:20.002864] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:59.846 [2024-10-15 12:54:20.003875] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:59.846 [2024-10-15 12:54:20.004879] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:59.846 [2024-10-15 12:54:20.005881] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:59.846 
[2024-10-15 12:54:20.006889] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:59.846 [2024-10-15 12:54:20.007907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:59.846 [2024-10-15 12:54:20.007920] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fef0197a000 00:13:59.846 [2024-10-15 12:54:20.008841] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:59.846 [2024-10-15 12:54:20.022176] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:59.846 [2024-10-15 12:54:20.022202] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:59.846 [2024-10-15 12:54:20.025607] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:59.846 [2024-10-15 12:54:20.025645] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:59.846 [2024-10-15 12:54:20.025716] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:59.846 [2024-10-15 12:54:20.025733] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:59.846 [2024-10-15 12:54:20.025739] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:59.846 [2024-10-15 12:54:20.026270] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:59.847 [2024-10-15 12:54:20.026280] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:59.847 [2024-10-15 12:54:20.026288] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:59.847 [2024-10-15 12:54:20.027281] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:59.847 [2024-10-15 12:54:20.027292] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:59.847 [2024-10-15 12:54:20.027300] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:59.847 [2024-10-15 12:54:20.028285] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:59.847 [2024-10-15 12:54:20.028295] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:59.847 [2024-10-15 12:54:20.029292] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:59.847 [2024-10-15 12:54:20.029300] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:59.847 [2024-10-15 12:54:20.029305] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:59.847 [2024-10-15 12:54:20.029311] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:59.847 [2024-10-15 12:54:20.029417] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:59.847 [2024-10-15 12:54:20.029423] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:59.847 [2024-10-15 12:54:20.029429] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:59.847 [2024-10-15 12:54:20.030303] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:59.847 [2024-10-15 12:54:20.031313] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:59.847 [2024-10-15 12:54:20.032320] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:59.847 [2024-10-15 12:54:20.033328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:59.847 [2024-10-15 12:54:20.033370] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:59.847 [2024-10-15 12:54:20.034340] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:59.847 [2024-10-15 12:54:20.034350] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:59.847 [2024-10-15 12:54:20.034355] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.034373] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:59.847 [2024-10-15 12:54:20.034383] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.034396] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:59.847 [2024-10-15 12:54:20.034400] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:59.847 [2024-10-15 12:54:20.034404] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:59.847 [2024-10-15 12:54:20.034415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:59.847 [2024-10-15 12:54:20.043610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:59.847 [2024-10-15 12:54:20.043623] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:59.847 [2024-10-15 12:54:20.043628] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:59.847 [2024-10-15 12:54:20.043631] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:59.847 [2024-10-15 12:54:20.043635] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:59.847 [2024-10-15 12:54:20.043640] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:59.847 [2024-10-15 12:54:20.043644] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:59.847 [2024-10-15 12:54:20.043648] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.043655] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.043665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:59.847 [2024-10-15 12:54:20.051608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:59.847 [2024-10-15 12:54:20.051623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.847 [2024-10-15 12:54:20.051631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.847 [2024-10-15 12:54:20.051639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.847 [2024-10-15 12:54:20.051646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.847 [2024-10-15 12:54:20.051653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.051659] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait 
for set keep alive timeout (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.051667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:59.847 [2024-10-15 12:54:20.059610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:59.847 [2024-10-15 12:54:20.059619] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:59.847 [2024-10-15 12:54:20.059626] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.059633] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.059638] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.059646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:59.847 [2024-10-15 12:54:20.067608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:59.847 [2024-10-15 12:54:20.067666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.067674] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.067682] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:59.847 [2024-10-15 12:54:20.067686] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:59.847 [2024-10-15 12:54:20.067689] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:59.847 [2024-10-15 12:54:20.067696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:59.847 [2024-10-15 12:54:20.075608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:59.847 [2024-10-15 12:54:20.075621] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:59.847 [2024-10-15 12:54:20.075629] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.075637] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.075643] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:59.847 [2024-10-15 12:54:20.075647] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:59.847 [2024-10-15 12:54:20.075650] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:59.847 [2024-10-15 12:54:20.075656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:59.847 [2024-10-15 12:54:20.083607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:59.847 [2024-10-15 12:54:20.083625] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.083632] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.083638] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:59.847 [2024-10-15 12:54:20.083642] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:59.847 [2024-10-15 12:54:20.083645] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:59.847 [2024-10-15 12:54:20.083651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:59.847 [2024-10-15 12:54:20.091609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:59.847 [2024-10-15 12:54:20.091619] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.091625] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:59.847 [2024-10-15 12:54:20.091633] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:59.848 [2024-10-15 12:54:20.091638] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:59.848 [2024-10-15 12:54:20.091643] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:59.848 [2024-10-15 12:54:20.091648] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:59.848 [2024-10-15 12:54:20.091652] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:59.848 [2024-10-15 12:54:20.091657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:59.848 [2024-10-15 12:54:20.091662] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:59.848 [2024-10-15 12:54:20.091677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:59.848 [2024-10-15 12:54:20.099607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:59.848 [2024-10-15 12:54:20.099620] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:59.848 [2024-10-15 12:54:20.107609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:59.848 [2024-10-15 12:54:20.107622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:59.848 [2024-10-15 12:54:20.115609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:59.848 [2024-10-15 12:54:20.115622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF 
QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:59.848 [2024-10-15 12:54:20.123608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:59.848 [2024-10-15 12:54:20.123627] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:59.848 [2024-10-15 12:54:20.123632] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:59.848 [2024-10-15 12:54:20.123637] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:59.848 [2024-10-15 12:54:20.123640] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:59.848 [2024-10-15 12:54:20.123643] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:59.848 [2024-10-15 12:54:20.123649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:59.848 [2024-10-15 12:54:20.123656] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:59.848 [2024-10-15 12:54:20.123660] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:59.848 [2024-10-15 12:54:20.123663] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:59.848 [2024-10-15 12:54:20.123668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:59.848 [2024-10-15 12:54:20.123674] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:59.848 [2024-10-15 12:54:20.123685] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:59.848 
[2024-10-15 12:54:20.123688] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:59.848 [2024-10-15 12:54:20.123693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:59.848 [2024-10-15 12:54:20.123702] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:59.848 [2024-10-15 12:54:20.123706] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:59.848 [2024-10-15 12:54:20.123709] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:59.848 [2024-10-15 12:54:20.123715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:59.848 [2024-10-15 12:54:20.131609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:59.848 [2024-10-15 12:54:20.131626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:59.848 [2024-10-15 12:54:20.131638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:59.848 [2024-10-15 12:54:20.131647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:59.848 ===================================================== 00:13:59.848 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:59.848 ===================================================== 00:13:59.848 Controller Capabilities/Features 00:13:59.848 ================================ 00:13:59.848 Vendor ID: 4e58 00:13:59.848 Subsystem Vendor ID: 4e58 
00:13:59.848 Serial Number: SPDK2 00:13:59.848 Model Number: SPDK bdev Controller 00:13:59.848 Firmware Version: 25.01 00:13:59.848 Recommended Arb Burst: 6 00:13:59.848 IEEE OUI Identifier: 8d 6b 50 00:13:59.848 Multi-path I/O 00:13:59.848 May have multiple subsystem ports: Yes 00:13:59.848 May have multiple controllers: Yes 00:13:59.848 Associated with SR-IOV VF: No 00:13:59.848 Max Data Transfer Size: 131072 00:13:59.848 Max Number of Namespaces: 32 00:13:59.848 Max Number of I/O Queues: 127 00:13:59.848 NVMe Specification Version (VS): 1.3 00:13:59.848 NVMe Specification Version (Identify): 1.3 00:13:59.848 Maximum Queue Entries: 256 00:13:59.848 Contiguous Queues Required: Yes 00:13:59.848 Arbitration Mechanisms Supported 00:13:59.848 Weighted Round Robin: Not Supported 00:13:59.848 Vendor Specific: Not Supported 00:13:59.848 Reset Timeout: 15000 ms 00:13:59.848 Doorbell Stride: 4 bytes 00:13:59.848 NVM Subsystem Reset: Not Supported 00:13:59.848 Command Sets Supported 00:13:59.848 NVM Command Set: Supported 00:13:59.848 Boot Partition: Not Supported 00:13:59.848 Memory Page Size Minimum: 4096 bytes 00:13:59.848 Memory Page Size Maximum: 4096 bytes 00:13:59.848 Persistent Memory Region: Not Supported 00:13:59.848 Optional Asynchronous Events Supported 00:13:59.848 Namespace Attribute Notices: Supported 00:13:59.848 Firmware Activation Notices: Not Supported 00:13:59.848 ANA Change Notices: Not Supported 00:13:59.848 PLE Aggregate Log Change Notices: Not Supported 00:13:59.848 LBA Status Info Alert Notices: Not Supported 00:13:59.848 EGE Aggregate Log Change Notices: Not Supported 00:13:59.848 Normal NVM Subsystem Shutdown event: Not Supported 00:13:59.848 Zone Descriptor Change Notices: Not Supported 00:13:59.848 Discovery Log Change Notices: Not Supported 00:13:59.848 Controller Attributes 00:13:59.848 128-bit Host Identifier: Supported 00:13:59.848 Non-Operational Permissive Mode: Not Supported 00:13:59.848 NVM Sets: Not Supported 00:13:59.848 Read Recovery 
Levels: Not Supported 00:13:59.848 Endurance Groups: Not Supported 00:13:59.848 Predictable Latency Mode: Not Supported 00:13:59.848 Traffic Based Keep ALive: Not Supported 00:13:59.848 Namespace Granularity: Not Supported 00:13:59.848 SQ Associations: Not Supported 00:13:59.848 UUID List: Not Supported 00:13:59.848 Multi-Domain Subsystem: Not Supported 00:13:59.848 Fixed Capacity Management: Not Supported 00:13:59.848 Variable Capacity Management: Not Supported 00:13:59.848 Delete Endurance Group: Not Supported 00:13:59.848 Delete NVM Set: Not Supported 00:13:59.848 Extended LBA Formats Supported: Not Supported 00:13:59.848 Flexible Data Placement Supported: Not Supported 00:13:59.848 00:13:59.848 Controller Memory Buffer Support 00:13:59.848 ================================ 00:13:59.848 Supported: No 00:13:59.848 00:13:59.848 Persistent Memory Region Support 00:13:59.848 ================================ 00:13:59.848 Supported: No 00:13:59.848 00:13:59.848 Admin Command Set Attributes 00:13:59.848 ============================ 00:13:59.848 Security Send/Receive: Not Supported 00:13:59.848 Format NVM: Not Supported 00:13:59.848 Firmware Activate/Download: Not Supported 00:13:59.848 Namespace Management: Not Supported 00:13:59.848 Device Self-Test: Not Supported 00:13:59.848 Directives: Not Supported 00:13:59.848 NVMe-MI: Not Supported 00:13:59.848 Virtualization Management: Not Supported 00:13:59.848 Doorbell Buffer Config: Not Supported 00:13:59.848 Get LBA Status Capability: Not Supported 00:13:59.848 Command & Feature Lockdown Capability: Not Supported 00:13:59.848 Abort Command Limit: 4 00:13:59.848 Async Event Request Limit: 4 00:13:59.848 Number of Firmware Slots: N/A 00:13:59.848 Firmware Slot 1 Read-Only: N/A 00:13:59.848 Firmware Activation Without Reset: N/A 00:13:59.848 Multiple Update Detection Support: N/A 00:13:59.848 Firmware Update Granularity: No Information Provided 00:13:59.848 Per-Namespace SMART Log: No 00:13:59.848 Asymmetric Namespace Access 
Log Page: Not Supported 00:13:59.848 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:59.849 Command Effects Log Page: Supported 00:13:59.849 Get Log Page Extended Data: Supported 00:13:59.849 Telemetry Log Pages: Not Supported 00:13:59.849 Persistent Event Log Pages: Not Supported 00:13:59.849 Supported Log Pages Log Page: May Support 00:13:59.849 Commands Supported & Effects Log Page: Not Supported 00:13:59.849 Feature Identifiers & Effects Log Page:May Support 00:13:59.849 NVMe-MI Commands & Effects Log Page: May Support 00:13:59.849 Data Area 4 for Telemetry Log: Not Supported 00:13:59.849 Error Log Page Entries Supported: 128 00:13:59.849 Keep Alive: Supported 00:13:59.849 Keep Alive Granularity: 10000 ms 00:13:59.849 00:13:59.849 NVM Command Set Attributes 00:13:59.849 ========================== 00:13:59.849 Submission Queue Entry Size 00:13:59.849 Max: 64 00:13:59.849 Min: 64 00:13:59.849 Completion Queue Entry Size 00:13:59.849 Max: 16 00:13:59.849 Min: 16 00:13:59.849 Number of Namespaces: 32 00:13:59.849 Compare Command: Supported 00:13:59.849 Write Uncorrectable Command: Not Supported 00:13:59.849 Dataset Management Command: Supported 00:13:59.849 Write Zeroes Command: Supported 00:13:59.849 Set Features Save Field: Not Supported 00:13:59.849 Reservations: Not Supported 00:13:59.849 Timestamp: Not Supported 00:13:59.849 Copy: Supported 00:13:59.849 Volatile Write Cache: Present 00:13:59.849 Atomic Write Unit (Normal): 1 00:13:59.849 Atomic Write Unit (PFail): 1 00:13:59.849 Atomic Compare & Write Unit: 1 00:13:59.849 Fused Compare & Write: Supported 00:13:59.849 Scatter-Gather List 00:13:59.849 SGL Command Set: Supported (Dword aligned) 00:13:59.849 SGL Keyed: Not Supported 00:13:59.849 SGL Bit Bucket Descriptor: Not Supported 00:13:59.849 SGL Metadata Pointer: Not Supported 00:13:59.849 Oversized SGL: Not Supported 00:13:59.849 SGL Metadata Address: Not Supported 00:13:59.849 SGL Offset: Not Supported 00:13:59.849 Transport SGL Data Block: Not Supported 
00:13:59.849 Replay Protected Memory Block: Not Supported 00:13:59.849 00:13:59.849 Firmware Slot Information 00:13:59.849 ========================= 00:13:59.849 Active slot: 1 00:13:59.849 Slot 1 Firmware Revision: 25.01 00:13:59.849 00:13:59.849 00:13:59.849 Commands Supported and Effects 00:13:59.849 ============================== 00:13:59.849 Admin Commands 00:13:59.849 -------------- 00:13:59.849 Get Log Page (02h): Supported 00:13:59.849 Identify (06h): Supported 00:13:59.849 Abort (08h): Supported 00:13:59.849 Set Features (09h): Supported 00:13:59.849 Get Features (0Ah): Supported 00:13:59.849 Asynchronous Event Request (0Ch): Supported 00:13:59.849 Keep Alive (18h): Supported 00:13:59.849 I/O Commands 00:13:59.849 ------------ 00:13:59.849 Flush (00h): Supported LBA-Change 00:13:59.849 Write (01h): Supported LBA-Change 00:13:59.849 Read (02h): Supported 00:13:59.849 Compare (05h): Supported 00:13:59.849 Write Zeroes (08h): Supported LBA-Change 00:13:59.849 Dataset Management (09h): Supported LBA-Change 00:13:59.849 Copy (19h): Supported LBA-Change 00:13:59.849 00:13:59.849 Error Log 00:13:59.849 ========= 00:13:59.849 00:13:59.849 Arbitration 00:13:59.849 =========== 00:13:59.849 Arbitration Burst: 1 00:13:59.849 00:13:59.849 Power Management 00:13:59.849 ================ 00:13:59.849 Number of Power States: 1 00:13:59.849 Current Power State: Power State #0 00:13:59.849 Power State #0: 00:13:59.849 Max Power: 0.00 W 00:13:59.849 Non-Operational State: Operational 00:13:59.849 Entry Latency: Not Reported 00:13:59.849 Exit Latency: Not Reported 00:13:59.849 Relative Read Throughput: 0 00:13:59.849 Relative Read Latency: 0 00:13:59.849 Relative Write Throughput: 0 00:13:59.849 Relative Write Latency: 0 00:13:59.849 Idle Power: Not Reported 00:13:59.849 Active Power: Not Reported 00:13:59.849 Non-Operational Permissive Mode: Not Supported 00:13:59.849 00:13:59.849 Health Information 00:13:59.849 ================== 00:13:59.849 Critical Warnings: 00:13:59.849 
Available Spare Space: OK 00:13:59.849 Temperature: OK 00:13:59.849 Device Reliability: OK 00:13:59.849 Read Only: No 00:13:59.849 Volatile Memory Backup: OK 00:13:59.849 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:59.849 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:59.849 Available Spare: 0% 00:13:59.849 Available Sp[2024-10-15 12:54:20.131733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:59.849 [2024-10-15 12:54:20.139609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:59.849 [2024-10-15 12:54:20.139638] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:59.849 [2024-10-15 12:54:20.139646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.849 [2024-10-15 12:54:20.139652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.849 [2024-10-15 12:54:20.139658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.849 [2024-10-15 12:54:20.139663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.849 [2024-10-15 12:54:20.139705] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:59.849 [2024-10-15 12:54:20.139719] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:59.849 [2024-10-15 12:54:20.140708] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 
00:13:59.849 [2024-10-15 12:54:20.140754] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:59.849 [2024-10-15 12:54:20.140761] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:59.849 [2024-10-15 12:54:20.141713] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:59.849 [2024-10-15 12:54:20.141724] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:59.849 [2024-10-15 12:54:20.141795] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:59.849 [2024-10-15 12:54:20.142753] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:00.109 are Threshold: 0% 00:14:00.109 Life Percentage Used: 0% 00:14:00.109 Data Units Read: 0 00:14:00.109 Data Units Written: 0 00:14:00.109 Host Read Commands: 0 00:14:00.109 Host Write Commands: 0 00:14:00.109 Controller Busy Time: 0 minutes 00:14:00.109 Power Cycles: 0 00:14:00.109 Power On Hours: 0 hours 00:14:00.109 Unsafe Shutdowns: 0 00:14:00.109 Unrecoverable Media Errors: 0 00:14:00.109 Lifetime Error Log Entries: 0 00:14:00.109 Warning Temperature Time: 0 minutes 00:14:00.109 Critical Temperature Time: 0 minutes 00:14:00.109 00:14:00.109 Number of Queues 00:14:00.109 ================ 00:14:00.109 Number of I/O Submission Queues: 127 00:14:00.109 Number of I/O Completion Queues: 127 00:14:00.109 00:14:00.109 Active Namespaces 00:14:00.109 ================= 00:14:00.109 Namespace ID:1 00:14:00.109 Error Recovery Timeout: Unlimited 00:14:00.109 Command Set Identifier: NVM (00h) 00:14:00.109 Deallocate: Supported 00:14:00.109 Deallocated/Unwritten Error: Not Supported 
00:14:00.109 Deallocated Read Value: Unknown 00:14:00.109 Deallocate in Write Zeroes: Not Supported 00:14:00.109 Deallocated Guard Field: 0xFFFF 00:14:00.109 Flush: Supported 00:14:00.109 Reservation: Supported 00:14:00.109 Namespace Sharing Capabilities: Multiple Controllers 00:14:00.109 Size (in LBAs): 131072 (0GiB) 00:14:00.109 Capacity (in LBAs): 131072 (0GiB) 00:14:00.109 Utilization (in LBAs): 131072 (0GiB) 00:14:00.109 NGUID: AF5E05740C69498A8DBF47D32FEF25C7 00:14:00.109 UUID: af5e0574-0c69-498a-8dbf-47d32fef25c7 00:14:00.109 Thin Provisioning: Not Supported 00:14:00.109 Per-NS Atomic Units: Yes 00:14:00.109 Atomic Boundary Size (Normal): 0 00:14:00.109 Atomic Boundary Size (PFail): 0 00:14:00.109 Atomic Boundary Offset: 0 00:14:00.109 Maximum Single Source Range Length: 65535 00:14:00.109 Maximum Copy Length: 65535 00:14:00.109 Maximum Source Range Count: 1 00:14:00.109 NGUID/EUI64 Never Reused: No 00:14:00.109 Namespace Write Protected: No 00:14:00.109 Number of LBA Formats: 1 00:14:00.109 Current LBA Format: LBA Format #00 00:14:00.109 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:00.109 00:14:00.109 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:00.109 [2024-10-15 12:54:20.361852] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:05.381 Initializing NVMe Controllers 00:14:05.381 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:05.381 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:05.381 Initialization complete. Launching workers. 
00:14:05.381 ======================================================== 00:14:05.381 Latency(us) 00:14:05.381 Device Information : IOPS MiB/s Average min max 00:14:05.381 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39941.36 156.02 3204.29 933.90 6666.61 00:14:05.381 ======================================================== 00:14:05.381 Total : 39941.36 156.02 3204.29 933.90 6666.61 00:14:05.381 00:14:05.381 [2024-10-15 12:54:25.466864] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:05.381 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:05.381 [2024-10-15 12:54:25.689552] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.656 Initializing NVMe Controllers 00:14:10.656 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:10.656 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:10.656 Initialization complete. Launching workers. 
00:14:10.656 ======================================================== 00:14:10.656 Latency(us) 00:14:10.656 Device Information : IOPS MiB/s Average min max 00:14:10.656 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39926.51 155.96 3205.71 947.99 10626.94 00:14:10.656 ======================================================== 00:14:10.656 Total : 39926.51 155.96 3205.71 947.99 10626.94 00:14:10.656 00:14:10.656 [2024-10-15 12:54:30.710565] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.656 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:10.656 [2024-10-15 12:54:30.901777] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:15.927 [2024-10-15 12:54:36.031917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:15.927 Initializing NVMe Controllers 00:14:15.927 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:15.927 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:15.927 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:15.927 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:15.927 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:15.927 Initialization complete. Launching workers. 
00:14:15.927 Starting thread on core 2 00:14:15.927 Starting thread on core 3 00:14:15.927 Starting thread on core 1 00:14:15.927 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:16.186 [2024-10-15 12:54:36.308616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:19.474 [2024-10-15 12:54:39.374087] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:19.474 Initializing NVMe Controllers 00:14:19.474 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:19.474 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:19.474 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:19.474 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:19.474 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:19.474 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:19.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:19.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:19.474 Initialization complete. Launching workers. 
00:14:19.474 Starting thread on core 1 with urgent priority queue 00:14:19.474 Starting thread on core 2 with urgent priority queue 00:14:19.474 Starting thread on core 3 with urgent priority queue 00:14:19.474 Starting thread on core 0 with urgent priority queue 00:14:19.474 SPDK bdev Controller (SPDK2 ) core 0: 7682.33 IO/s 13.02 secs/100000 ios 00:14:19.474 SPDK bdev Controller (SPDK2 ) core 1: 7267.00 IO/s 13.76 secs/100000 ios 00:14:19.474 SPDK bdev Controller (SPDK2 ) core 2: 9792.67 IO/s 10.21 secs/100000 ios 00:14:19.474 SPDK bdev Controller (SPDK2 ) core 3: 7546.67 IO/s 13.25 secs/100000 ios 00:14:19.474 ======================================================== 00:14:19.474 00:14:19.474 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:19.474 [2024-10-15 12:54:39.649075] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:19.474 Initializing NVMe Controllers 00:14:19.474 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:19.474 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:19.474 Namespace ID: 1 size: 0GB 00:14:19.474 Initialization complete. 00:14:19.474 INFO: using host memory buffer for IO 00:14:19.474 Hello world! 
00:14:19.474 [2024-10-15 12:54:39.662173] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:19.474 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:19.733 [2024-10-15 12:54:39.931309] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:21.112 Initializing NVMe Controllers 00:14:21.112 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:21.112 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:21.112 Initialization complete. Launching workers. 00:14:21.112 submit (in ns) avg, min, max = 5326.2, 3195.2, 3999594.3 00:14:21.112 complete (in ns) avg, min, max = 22451.2, 1775.2, 3999706.7 00:14:21.112 00:14:21.112 Submit histogram 00:14:21.112 ================ 00:14:21.112 Range in us Cumulative Count 00:14:21.112 3.185 - 3.200: 0.0059% ( 1) 00:14:21.112 3.200 - 3.215: 0.0297% ( 4) 00:14:21.112 3.215 - 3.230: 0.1842% ( 26) 00:14:21.112 3.230 - 3.246: 0.6298% ( 75) 00:14:21.112 3.246 - 3.261: 1.7885% ( 195) 00:14:21.112 3.261 - 3.276: 4.4920% ( 455) 00:14:21.112 3.276 - 3.291: 9.7802% ( 890) 00:14:21.112 3.291 - 3.307: 15.6150% ( 982) 00:14:21.112 3.307 - 3.322: 22.3054% ( 1126) 00:14:21.112 3.322 - 3.337: 28.8948% ( 1109) 00:14:21.112 3.337 - 3.352: 34.5217% ( 947) 00:14:21.112 3.352 - 3.368: 40.5585% ( 1016) 00:14:21.112 3.368 - 3.383: 46.1913% ( 948) 00:14:21.112 3.383 - 3.398: 52.0143% ( 980) 00:14:21.112 3.398 - 3.413: 57.0945% ( 855) 00:14:21.112 3.413 - 3.429: 62.9590% ( 987) 00:14:21.112 3.429 - 3.444: 70.1545% ( 1211) 00:14:21.112 3.444 - 3.459: 75.0267% ( 820) 00:14:21.112 3.459 - 3.474: 79.8217% ( 807) 00:14:21.112 3.474 - 3.490: 82.7392% ( 491) 00:14:21.112 3.490 - 3.505: 84.8010% ( 347) 
00:14:21.112 3.505 - 3.520: 86.1913% ( 234) 00:14:21.112 3.520 - 3.535: 86.9875% ( 134) 00:14:21.112 3.535 - 3.550: 87.4688% ( 81) 00:14:21.112 3.550 - 3.566: 87.9204% ( 76) 00:14:21.112 3.566 - 3.581: 88.6215% ( 118) 00:14:21.112 3.581 - 3.596: 89.5425% ( 155) 00:14:21.112 3.596 - 3.611: 90.4337% ( 150) 00:14:21.112 3.611 - 3.627: 91.3844% ( 160) 00:14:21.112 3.627 - 3.642: 92.3886% ( 169) 00:14:21.112 3.642 - 3.657: 93.3274% ( 158) 00:14:21.112 3.657 - 3.672: 94.1652% ( 141) 00:14:21.112 3.672 - 3.688: 95.1753% ( 170) 00:14:21.112 3.688 - 3.703: 96.1200% ( 159) 00:14:21.112 3.703 - 3.718: 96.9519% ( 140) 00:14:21.112 3.718 - 3.733: 97.6411% ( 116) 00:14:21.112 3.733 - 3.749: 98.1283% ( 82) 00:14:21.112 3.749 - 3.764: 98.5621% ( 73) 00:14:21.112 3.764 - 3.779: 98.9780% ( 70) 00:14:21.112 3.779 - 3.794: 99.2157% ( 40) 00:14:21.112 3.794 - 3.810: 99.3523% ( 23) 00:14:21.112 3.810 - 3.825: 99.4593% ( 18) 00:14:21.112 3.825 - 3.840: 99.5306% ( 12) 00:14:21.112 3.840 - 3.855: 99.5663% ( 6) 00:14:21.112 3.855 - 3.870: 99.5960% ( 5) 00:14:21.112 3.870 - 3.886: 99.6019% ( 1) 00:14:21.112 3.886 - 3.901: 99.6078% ( 1) 00:14:21.112 3.901 - 3.931: 99.6138% ( 1) 00:14:21.112 3.931 - 3.962: 99.6197% ( 1) 00:14:21.112 3.962 - 3.992: 99.6257% ( 1) 00:14:21.112 5.090 - 5.120: 99.6316% ( 1) 00:14:21.112 5.303 - 5.333: 99.6376% ( 1) 00:14:21.112 5.364 - 5.394: 99.6435% ( 1) 00:14:21.112 5.394 - 5.425: 99.6494% ( 1) 00:14:21.112 5.455 - 5.486: 99.6554% ( 1) 00:14:21.112 5.547 - 5.577: 99.6673% ( 2) 00:14:21.112 5.608 - 5.638: 99.6732% ( 1) 00:14:21.112 5.638 - 5.669: 99.6791% ( 1) 00:14:21.112 5.760 - 5.790: 99.6851% ( 1) 00:14:21.112 5.790 - 5.821: 99.6910% ( 1) 00:14:21.112 6.004 - 6.034: 99.6970% ( 1) 00:14:21.112 6.034 - 6.065: 99.7089% ( 2) 00:14:21.112 6.065 - 6.095: 99.7148% ( 1) 00:14:21.112 6.187 - 6.217: 99.7207% ( 1) 00:14:21.112 6.248 - 6.278: 99.7267% ( 1) 00:14:21.112 6.430 - 6.461: 99.7326% ( 1) 00:14:21.112 6.461 - 6.491: 99.7386% ( 1) 00:14:21.112 6.491 - 6.522: 
99.7445% ( 1) 00:14:21.112 6.522 - 6.552: 99.7504% ( 1) 00:14:21.112 6.766 - 6.796: 99.7564% ( 1) 00:14:21.112 6.827 - 6.857: 99.7623% ( 1) 00:14:21.112 6.979 - 7.010: 99.7683% ( 1) 00:14:21.112 7.040 - 7.070: 99.7742% ( 1) 00:14:21.112 7.070 - 7.101: 99.7802% ( 1) 00:14:21.112 7.131 - 7.162: 99.7861% ( 1) 00:14:21.112 7.253 - 7.284: 99.7920% ( 1) 00:14:21.112 7.284 - 7.314: 99.8039% ( 2) 00:14:21.112 7.375 - 7.406: 99.8099% ( 1) 00:14:21.112 7.497 - 7.528: 99.8158% ( 1) 00:14:21.112 [2024-10-15 12:54:41.024607] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:21.112 7.741 - 7.771: 99.8217% ( 1) 00:14:21.112 7.771 - 7.802: 99.8277% ( 1) 00:14:21.112 7.802 - 7.863: 99.8336% ( 1) 00:14:21.112 7.863 - 7.924: 99.8455% ( 2) 00:14:21.112 7.985 - 8.046: 99.8574% ( 2) 00:14:21.112 8.168 - 8.229: 99.8633% ( 1) 00:14:21.112 8.290 - 8.350: 99.8693% ( 1) 00:14:21.112 8.350 - 8.411: 99.8812% ( 2) 00:14:21.112 8.594 - 8.655: 99.8871% ( 1) 00:14:21.112 8.655 - 8.716: 99.8930% ( 1) 00:14:21.112 8.716 - 8.777: 99.8990% ( 1) 00:14:21.112 8.838 - 8.899: 99.9049% ( 1) 00:14:21.112 9.021 - 9.082: 99.9109% ( 1) 00:14:21.112 9.509 - 9.570: 99.9168% ( 1) 00:14:21.112 10.118 - 10.179: 99.9228% ( 1) 00:14:21.112 11.642 - 11.703: 99.9287% ( 1) 00:14:21.112 12.373 - 12.434: 99.9346% ( 1) 00:14:21.112 12.983 - 13.044: 99.9406% ( 1) 00:14:21.112 15.482 - 15.543: 99.9465% ( 1) 00:14:21.112 15.543 - 15.604: 99.9525% ( 1) 00:14:21.112 3994.575 - 4025.783: 100.0000% ( 8) 00:14:21.112 00:14:21.112 Complete histogram 00:14:21.112 ================== 00:14:21.112 Range in us Cumulative Count 00:14:21.112 1.775 - 1.783: 0.2139% ( 36) 00:14:21.112 1.783 - 1.790: 3.8087% ( 605) 00:14:21.112 1.790 - 1.798: 18.1996% ( 2422) 00:14:21.112 1.798 - 1.806: 36.3518% ( 3055) 00:14:21.112 1.806 - 1.813: 52.4777% ( 2714) 00:14:21.112 1.813 - 1.821: 71.3131% ( 3170) 00:14:21.112 1.821 - 1.829: 85.2050% ( 2338) 00:14:21.112 1.829 - 1.836: 91.2953% ( 1025) 
00:14:21.112 1.836 - 1.844: 94.5157% ( 542) 00:14:21.112 1.844 - 1.851: 97.0707% ( 430) 00:14:21.112 1.851 - 1.859: 98.1343% ( 179) 00:14:21.112 1.859 - 1.867: 98.5680% ( 73) 00:14:21.112 1.867 - 1.874: 98.7819% ( 36) 00:14:21.112 1.874 - 1.882: 98.8770% ( 16) 00:14:21.112 1.882 - 1.890: 98.9542% ( 13) 00:14:21.112 1.890 - 1.897: 99.0374% ( 14) 00:14:21.112 1.897 - 1.905: 99.0671% ( 5) 00:14:21.112 1.905 - 1.912: 99.1147% ( 8) 00:14:21.112 1.912 - 1.920: 99.1563% ( 7) 00:14:21.112 1.920 - 1.928: 99.1860% ( 5) 00:14:21.112 1.928 - 1.935: 99.2216% ( 6) 00:14:21.112 1.943 - 1.950: 99.2395% ( 3) 00:14:21.112 1.950 - 1.966: 99.2454% ( 1) 00:14:21.112 1.966 - 1.981: 99.2692% ( 4) 00:14:21.112 1.981 - 1.996: 99.2751% ( 1) 00:14:21.112 1.996 - 2.011: 99.2810% ( 1) 00:14:21.112 2.011 - 2.027: 99.2989% ( 3) 00:14:21.112 2.057 - 2.072: 99.3048% ( 1) 00:14:21.112 2.301 - 2.316: 99.3108% ( 1) 00:14:21.112 3.627 - 3.642: 99.3167% ( 1) 00:14:21.112 3.794 - 3.810: 99.3226% ( 1) 00:14:21.112 3.901 - 3.931: 99.3286% ( 1) 00:14:21.112 4.297 - 4.328: 99.3345% ( 1) 00:14:21.112 4.389 - 4.419: 99.3405% ( 1) 00:14:21.112 4.571 - 4.602: 99.3464% ( 1) 00:14:21.112 4.663 - 4.693: 99.3523% ( 1) 00:14:21.112 4.907 - 4.937: 99.3642% ( 2) 00:14:21.112 5.242 - 5.272: 99.3702% ( 1) 00:14:21.112 5.272 - 5.303: 99.3761% ( 1) 00:14:21.112 5.333 - 5.364: 99.3880% ( 2) 00:14:21.112 5.425 - 5.455: 99.3939% ( 1) 00:14:21.112 5.455 - 5.486: 99.3999% ( 1) 00:14:21.112 5.669 - 5.699: 99.4058% ( 1) 00:14:21.112 5.699 - 5.730: 99.4118% ( 1) 00:14:21.113 6.095 - 6.126: 99.4177% ( 1) 00:14:21.113 6.156 - 6.187: 99.4236% ( 1) 00:14:21.113 6.339 - 6.370: 99.4296% ( 1) 00:14:21.113 6.491 - 6.522: 99.4355% ( 1) 00:14:21.113 6.552 - 6.583: 99.4415% ( 1) 00:14:21.113 6.644 - 6.674: 99.4474% ( 1) 00:14:21.113 6.674 - 6.705: 99.4534% ( 1) 00:14:21.113 7.406 - 7.436: 99.4593% ( 1) 00:14:21.113 7.619 - 7.650: 99.4652% ( 1) 00:14:21.113 7.802 - 7.863: 99.4712% ( 1) 00:14:21.113 7.863 - 7.924: 99.4771% ( 1) 00:14:21.113 
7.985 - 8.046: 99.4831% ( 1) 00:14:21.113 3651.291 - 3666.895: 99.4890% ( 1) 00:14:21.113 3994.575 - 4025.783: 100.0000% ( 86) 00:14:21.113 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:21.113 [ 00:14:21.113 { 00:14:21.113 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:21.113 "subtype": "Discovery", 00:14:21.113 "listen_addresses": [], 00:14:21.113 "allow_any_host": true, 00:14:21.113 "hosts": [] 00:14:21.113 }, 00:14:21.113 { 00:14:21.113 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:21.113 "subtype": "NVMe", 00:14:21.113 "listen_addresses": [ 00:14:21.113 { 00:14:21.113 "trtype": "VFIOUSER", 00:14:21.113 "adrfam": "IPv4", 00:14:21.113 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:21.113 "trsvcid": "0" 00:14:21.113 } 00:14:21.113 ], 00:14:21.113 "allow_any_host": true, 00:14:21.113 "hosts": [], 00:14:21.113 "serial_number": "SPDK1", 00:14:21.113 "model_number": "SPDK bdev Controller", 00:14:21.113 "max_namespaces": 32, 00:14:21.113 "min_cntlid": 1, 00:14:21.113 "max_cntlid": 65519, 00:14:21.113 "namespaces": [ 00:14:21.113 { 00:14:21.113 "nsid": 1, 00:14:21.113 "bdev_name": "Malloc1", 00:14:21.113 "name": "Malloc1", 00:14:21.113 "nguid": "D067CDB417104F27BC02BF1C02FD8779", 00:14:21.113 "uuid": 
"d067cdb4-1710-4f27-bc02-bf1c02fd8779" 00:14:21.113 }, 00:14:21.113 { 00:14:21.113 "nsid": 2, 00:14:21.113 "bdev_name": "Malloc3", 00:14:21.113 "name": "Malloc3", 00:14:21.113 "nguid": "D0567A4BA9BF4C4094B35B76B7F6871E", 00:14:21.113 "uuid": "d0567a4b-a9bf-4c40-94b3-5b76b7f6871e" 00:14:21.113 } 00:14:21.113 ] 00:14:21.113 }, 00:14:21.113 { 00:14:21.113 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:21.113 "subtype": "NVMe", 00:14:21.113 "listen_addresses": [ 00:14:21.113 { 00:14:21.113 "trtype": "VFIOUSER", 00:14:21.113 "adrfam": "IPv4", 00:14:21.113 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:21.113 "trsvcid": "0" 00:14:21.113 } 00:14:21.113 ], 00:14:21.113 "allow_any_host": true, 00:14:21.113 "hosts": [], 00:14:21.113 "serial_number": "SPDK2", 00:14:21.113 "model_number": "SPDK bdev Controller", 00:14:21.113 "max_namespaces": 32, 00:14:21.113 "min_cntlid": 1, 00:14:21.113 "max_cntlid": 65519, 00:14:21.113 "namespaces": [ 00:14:21.113 { 00:14:21.113 "nsid": 1, 00:14:21.113 "bdev_name": "Malloc2", 00:14:21.113 "name": "Malloc2", 00:14:21.113 "nguid": "AF5E05740C69498A8DBF47D32FEF25C7", 00:14:21.113 "uuid": "af5e0574-0c69-498a-8dbf-47d32fef25c7" 00:14:21.113 } 00:14:21.113 ] 00:14:21.113 } 00:14:21.113 ] 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1187234 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@1265 -- # local i=0 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:21.113 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:21.113 [2024-10-15 12:54:41.428064] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:21.372 Malloc4 00:14:21.372 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:21.372 [2024-10-15 12:54:41.662795] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:21.372 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:21.632 Asynchronous Event Request test 00:14:21.632 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:21.632 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:21.632 Registering asynchronous event callbacks... 00:14:21.632 Starting namespace attribute notice tests for all controllers... 00:14:21.632 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:21.632 aer_cb - Changed Namespace 00:14:21.632 Cleaning up... 
00:14:21.632 [ 00:14:21.632 { 00:14:21.632 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:21.632 "subtype": "Discovery", 00:14:21.632 "listen_addresses": [], 00:14:21.632 "allow_any_host": true, 00:14:21.632 "hosts": [] 00:14:21.632 }, 00:14:21.632 { 00:14:21.632 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:21.632 "subtype": "NVMe", 00:14:21.632 "listen_addresses": [ 00:14:21.632 { 00:14:21.632 "trtype": "VFIOUSER", 00:14:21.632 "adrfam": "IPv4", 00:14:21.632 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:21.632 "trsvcid": "0" 00:14:21.632 } 00:14:21.632 ], 00:14:21.632 "allow_any_host": true, 00:14:21.632 "hosts": [], 00:14:21.632 "serial_number": "SPDK1", 00:14:21.632 "model_number": "SPDK bdev Controller", 00:14:21.632 "max_namespaces": 32, 00:14:21.632 "min_cntlid": 1, 00:14:21.632 "max_cntlid": 65519, 00:14:21.632 "namespaces": [ 00:14:21.632 { 00:14:21.632 "nsid": 1, 00:14:21.632 "bdev_name": "Malloc1", 00:14:21.632 "name": "Malloc1", 00:14:21.632 "nguid": "D067CDB417104F27BC02BF1C02FD8779", 00:14:21.632 "uuid": "d067cdb4-1710-4f27-bc02-bf1c02fd8779" 00:14:21.632 }, 00:14:21.632 { 00:14:21.632 "nsid": 2, 00:14:21.632 "bdev_name": "Malloc3", 00:14:21.632 "name": "Malloc3", 00:14:21.632 "nguid": "D0567A4BA9BF4C4094B35B76B7F6871E", 00:14:21.632 "uuid": "d0567a4b-a9bf-4c40-94b3-5b76b7f6871e" 00:14:21.632 } 00:14:21.632 ] 00:14:21.632 }, 00:14:21.632 { 00:14:21.632 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:21.632 "subtype": "NVMe", 00:14:21.632 "listen_addresses": [ 00:14:21.632 { 00:14:21.632 "trtype": "VFIOUSER", 00:14:21.632 "adrfam": "IPv4", 00:14:21.632 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:21.632 "trsvcid": "0" 00:14:21.632 } 00:14:21.632 ], 00:14:21.632 "allow_any_host": true, 00:14:21.632 "hosts": [], 00:14:21.632 "serial_number": "SPDK2", 00:14:21.632 "model_number": "SPDK bdev Controller", 00:14:21.632 "max_namespaces": 32, 00:14:21.632 "min_cntlid": 1, 00:14:21.632 "max_cntlid": 65519, 00:14:21.632 "namespaces": [ 
00:14:21.632 { 00:14:21.632 "nsid": 1, 00:14:21.632 "bdev_name": "Malloc2", 00:14:21.632 "name": "Malloc2", 00:14:21.632 "nguid": "AF5E05740C69498A8DBF47D32FEF25C7", 00:14:21.632 "uuid": "af5e0574-0c69-498a-8dbf-47d32fef25c7" 00:14:21.632 }, 00:14:21.632 { 00:14:21.632 "nsid": 2, 00:14:21.632 "bdev_name": "Malloc4", 00:14:21.632 "name": "Malloc4", 00:14:21.632 "nguid": "E7EC0039390147729B6B4E4E98C8131D", 00:14:21.632 "uuid": "e7ec0039-3901-4772-9b6b-4e4e98c8131d" 00:14:21.632 } 00:14:21.632 ] 00:14:21.632 } 00:14:21.632 ] 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1187234 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1179087 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1179087 ']' 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1179087 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1179087 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1179087' 00:14:21.632 killing process with pid 1179087 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 1179087 00:14:21.632 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1179087 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1187257 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1187257' 00:14:21.892 Process pid: 1187257 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1187257 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1187257 ']' 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.892 
12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.892 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:22.151 [2024-10-15 12:54:42.219242] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:22.151 [2024-10-15 12:54:42.220129] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:14:22.151 [2024-10-15 12:54:42.220163] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.151 [2024-10-15 12:54:42.285837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.151 [2024-10-15 12:54:42.328126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.151 [2024-10-15 12:54:42.328162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.151 [2024-10-15 12:54:42.328170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.151 [2024-10-15 12:54:42.328175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.151 [2024-10-15 12:54:42.328181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:22.151 [2024-10-15 12:54:42.329726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.151 [2024-10-15 12:54:42.329835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.151 [2024-10-15 12:54:42.329946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.151 [2024-10-15 12:54:42.329947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.151 [2024-10-15 12:54:42.396419] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:22.151 [2024-10-15 12:54:42.397652] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:22.151 [2024-10-15 12:54:42.397837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:22.151 [2024-10-15 12:54:42.398289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:22.151 [2024-10-15 12:54:42.398341] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:22.151 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.151 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:22.151 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:23.529 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:23.529 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:23.529 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:23.529 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:23.529 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:23.529 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:23.529 Malloc1 00:14:23.789 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:23.789 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:24.050 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:24.309 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:24.309 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:24.309 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:24.309 Malloc2 00:14:24.568 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:24.568 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:24.826 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:25.086 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:25.086 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1187257 00:14:25.086 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1187257 ']' 00:14:25.086 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1187257 00:14:25.086 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:14:25.086 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:25.086 12:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1187257 00:14:25.086 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:25.086 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:25.086 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1187257' 00:14:25.086 killing process with pid 1187257 00:14:25.086 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1187257 00:14:25.086 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1187257 00:14:25.345 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:25.345 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:25.345 00:14:25.345 real 0m50.566s 00:14:25.345 user 3m15.694s 00:14:25.345 sys 0m3.177s 00:14:25.345 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.345 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:25.345 ************************************ 00:14:25.345 END TEST nvmf_vfio_user 00:14:25.345 ************************************ 00:14:25.345 12:54:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:25.345 12:54:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:25.345 12:54:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:25.345 12:54:45 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:25.345 ************************************ 00:14:25.345 START TEST nvmf_vfio_user_nvme_compliance 00:14:25.345 ************************************ 00:14:25.345 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:25.345 * Looking for test storage... 00:14:25.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.604 12:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.604 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.605 12:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:25.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.605 --rc genhtml_branch_coverage=1 00:14:25.605 --rc genhtml_function_coverage=1 00:14:25.605 --rc genhtml_legend=1 00:14:25.605 --rc geninfo_all_blocks=1 00:14:25.605 --rc geninfo_unexecuted_blocks=1 00:14:25.605 00:14:25.605 ' 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:25.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.605 --rc genhtml_branch_coverage=1 00:14:25.605 --rc genhtml_function_coverage=1 00:14:25.605 --rc genhtml_legend=1 00:14:25.605 --rc geninfo_all_blocks=1 00:14:25.605 --rc geninfo_unexecuted_blocks=1 00:14:25.605 00:14:25.605 ' 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:25.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.605 --rc genhtml_branch_coverage=1 00:14:25.605 --rc genhtml_function_coverage=1 00:14:25.605 --rc 
genhtml_legend=1 00:14:25.605 --rc geninfo_all_blocks=1 00:14:25.605 --rc geninfo_unexecuted_blocks=1 00:14:25.605 00:14:25.605 ' 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:25.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.605 --rc genhtml_branch_coverage=1 00:14:25.605 --rc genhtml_function_coverage=1 00:14:25.605 --rc genhtml_legend=1 00:14:25.605 --rc geninfo_all_blocks=1 00:14:25.605 --rc geninfo_unexecuted_blocks=1 00:14:25.605 00:14:25.605 ' 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.605 12:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:25.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:25.605 12:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1188020 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1188020' 00:14:25.605 Process pid: 1188020 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1188020 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1188020 ']' 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.605 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:25.605 [2024-10-15 12:54:45.843165] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:14:25.605 [2024-10-15 12:54:45.843212] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.605 [2024-10-15 12:54:45.909289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:25.864 [2024-10-15 12:54:45.950708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.864 [2024-10-15 12:54:45.950745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.864 [2024-10-15 12:54:45.950752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.864 [2024-10-15 12:54:45.950759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.864 [2024-10-15 12:54:45.950764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:25.864 [2024-10-15 12:54:45.952047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.864 [2024-10-15 12:54:45.952158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.864 [2024-10-15 12:54:45.952159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.865 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.865 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:14:25.865 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.802 12:54:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:26.802 malloc0 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.802 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:27.061 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:27.061 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:27.061 00:14:27.061 00:14:27.061 CUnit - A unit testing framework for C - Version 2.1-3 00:14:27.061 http://cunit.sourceforge.net/ 00:14:27.061 00:14:27.061 00:14:27.061 Suite: nvme_compliance 00:14:27.061 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-15 12:54:47.271043] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:27.061 [2024-10-15 12:54:47.272386] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:27.061 [2024-10-15 12:54:47.272401] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:27.061 [2024-10-15 12:54:47.272407] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:27.061 [2024-10-15 12:54:47.276071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:27.061 passed 00:14:27.061 Test: admin_identify_ctrlr_verify_fused ...[2024-10-15 12:54:47.352667] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:27.061 [2024-10-15 12:54:47.355690] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:27.320 passed 00:14:27.320 Test: admin_identify_ns ...[2024-10-15 12:54:47.434851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:27.320 [2024-10-15 12:54:47.498614] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:27.320 [2024-10-15 12:54:47.506612] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:27.320 [2024-10-15 12:54:47.527699] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:27.320 passed 00:14:27.320 Test: admin_get_features_mandatory_features ...[2024-10-15 12:54:47.600385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:27.320 [2024-10-15 12:54:47.605416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:27.320 passed 00:14:27.578 Test: admin_get_features_optional_features ...[2024-10-15 12:54:47.680937] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:27.578 [2024-10-15 12:54:47.684963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:27.578 passed 00:14:27.578 Test: admin_set_features_number_of_queues ...[2024-10-15 12:54:47.762951] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:27.578 [2024-10-15 12:54:47.868689] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:27.579 passed 00:14:27.844 Test: admin_get_log_page_mandatory_logs ...[2024-10-15 12:54:47.942365] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:27.844 [2024-10-15 12:54:47.945389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:27.844 passed 00:14:27.844 Test: admin_get_log_page_with_lpo ...[2024-10-15 12:54:48.024940] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:27.844 [2024-10-15 12:54:48.088611] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:27.844 [2024-10-15 12:54:48.101678] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:27.844 passed 00:14:28.103 Test: fabric_property_get ...[2024-10-15 12:54:48.177242] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:28.103 [2024-10-15 12:54:48.178496] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:28.103 [2024-10-15 12:54:48.182268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:28.103 passed 00:14:28.103 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-15 12:54:48.257760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:28.103 [2024-10-15 12:54:48.258980] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:28.103 [2024-10-15 12:54:48.260775] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:28.103 passed 00:14:28.103 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-15 12:54:48.336479] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:28.103 [2024-10-15 12:54:48.419605] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:28.362 [2024-10-15 12:54:48.435608] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:28.362 [2024-10-15 12:54:48.440681] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:28.362 passed 00:14:28.362 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-15 12:54:48.517194] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:28.362 [2024-10-15 12:54:48.518422] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:28.362 [2024-10-15 12:54:48.520210] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:28.362 passed 00:14:28.362 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-15 12:54:48.597988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:28.362 [2024-10-15 12:54:48.674613] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:28.620 [2024-10-15 
12:54:48.698611] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:28.620 [2024-10-15 12:54:48.703701] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:28.620 passed 00:14:28.620 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-15 12:54:48.780482] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:28.621 [2024-10-15 12:54:48.781735] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:28.621 [2024-10-15 12:54:48.781759] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:28.621 [2024-10-15 12:54:48.783502] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:28.621 passed 00:14:28.621 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-15 12:54:48.859188] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:28.879 [2024-10-15 12:54:48.950611] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:28.879 [2024-10-15 12:54:48.958605] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:28.879 [2024-10-15 12:54:48.966637] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:28.879 [2024-10-15 12:54:48.974606] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:28.879 [2024-10-15 12:54:49.003693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:28.879 passed 00:14:28.879 Test: admin_create_io_sq_verify_pc ...[2024-10-15 12:54:49.080294] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:28.879 [2024-10-15 12:54:49.096615] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:28.879 [2024-10-15 12:54:49.114484] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:28.879 passed 00:14:28.879 Test: admin_create_io_qp_max_qps ...[2024-10-15 12:54:49.193041] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.256 [2024-10-15 12:54:50.307609] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:30.515 [2024-10-15 12:54:50.687793] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.515 passed 00:14:30.515 Test: admin_create_io_sq_shared_cq ...[2024-10-15 12:54:50.763849] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.774 [2024-10-15 12:54:50.902608] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:30.774 [2024-10-15 12:54:50.939695] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.774 passed 00:14:30.774 00:14:30.774 Run Summary: Type Total Ran Passed Failed Inactive 00:14:30.774 suites 1 1 n/a 0 0 00:14:30.774 tests 18 18 18 0 0 00:14:30.774 asserts 360 360 360 0 n/a 00:14:30.774 00:14:30.774 Elapsed time = 1.507 seconds 00:14:30.774 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1188020 00:14:30.774 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1188020 ']' 00:14:30.774 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1188020 00:14:30.774 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:14:30.774 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.774 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1188020 00:14:30.774 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:30.774 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:30.774 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1188020' 00:14:30.774 killing process with pid 1188020 00:14:30.775 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1188020 00:14:30.775 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1188020 00:14:31.034 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:31.034 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:31.034 00:14:31.034 real 0m5.635s 00:14:31.034 user 0m15.762s 00:14:31.034 sys 0m0.508s 00:14:31.034 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.034 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:31.034 ************************************ 00:14:31.034 END TEST nvmf_vfio_user_nvme_compliance 00:14:31.034 ************************************ 00:14:31.034 12:54:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:31.034 12:54:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:31.034 12:54:51 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.034 12:54:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.034 ************************************ 00:14:31.034 START TEST nvmf_vfio_user_fuzz 00:14:31.034 ************************************ 00:14:31.034 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:31.294 * Looking for test storage... 00:14:31.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.294 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:31.294 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:14:31.294 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:31.294 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:31.294 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.295 12:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:31.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.295 --rc genhtml_branch_coverage=1 00:14:31.295 --rc genhtml_function_coverage=1 00:14:31.295 --rc genhtml_legend=1 00:14:31.295 --rc geninfo_all_blocks=1 00:14:31.295 --rc geninfo_unexecuted_blocks=1 00:14:31.295 00:14:31.295 ' 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:31.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.295 --rc genhtml_branch_coverage=1 00:14:31.295 --rc genhtml_function_coverage=1 00:14:31.295 --rc genhtml_legend=1 00:14:31.295 --rc geninfo_all_blocks=1 00:14:31.295 --rc geninfo_unexecuted_blocks=1 00:14:31.295 00:14:31.295 ' 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:31.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.295 --rc genhtml_branch_coverage=1 00:14:31.295 --rc genhtml_function_coverage=1 00:14:31.295 --rc genhtml_legend=1 00:14:31.295 --rc geninfo_all_blocks=1 00:14:31.295 --rc geninfo_unexecuted_blocks=1 00:14:31.295 00:14:31.295 ' 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:31.295 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:31.295 --rc genhtml_branch_coverage=1 00:14:31.295 --rc genhtml_function_coverage=1 00:14:31.295 --rc genhtml_legend=1 00:14:31.295 --rc geninfo_all_blocks=1 00:14:31.295 --rc geninfo_unexecuted_blocks=1 00:14:31.295 00:14:31.295 ' 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.295 12:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1188999 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1188999' 00:14:31.295 Process pid: 1188999 00:14:31.295 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:31.296 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:31.296 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1188999 00:14:31.296 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1188999 ']' 00:14:31.296 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.296 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:31.296 12:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.296 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:31.296 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:31.554 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:31.554 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:14:31.554 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:32.491 malloc0 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.491 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:32.750 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.750 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:32.750 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:04.827 Fuzzing completed. Shutting down the fuzz application 00:15:04.827 00:15:04.827 Dumping successful admin opcodes: 00:15:04.827 8, 9, 10, 24, 00:15:04.827 Dumping successful io opcodes: 00:15:04.827 0, 00:15:04.827 NS: 0x20000081ef00 I/O qp, Total commands completed: 1028851, total successful commands: 4051, random_seed: 217485568 00:15:04.827 NS: 0x20000081ef00 admin qp, Total commands completed: 254522, total successful commands: 2054, random_seed: 4159842496 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1188999 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1188999 ']' 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1188999 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1188999 00:15:04.827 12:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1188999' 00:15:04.827 killing process with pid 1188999 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1188999 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1188999 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:04.827 00:15:04.827 real 0m32.199s 00:15:04.827 user 0m30.707s 00:15:04.827 sys 0m30.200s 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:04.827 ************************************ 00:15:04.827 END TEST nvmf_vfio_user_fuzz 00:15:04.827 ************************************ 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:04.827 ************************************ 00:15:04.827 START TEST nvmf_auth_target 00:15:04.827 ************************************ 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:04.827 * Looking for test storage... 00:15:04.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:04.827 12:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:04.827 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:04.828 12:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.828 --rc genhtml_branch_coverage=1 00:15:04.828 --rc genhtml_function_coverage=1 00:15:04.828 --rc genhtml_legend=1 00:15:04.828 --rc geninfo_all_blocks=1 00:15:04.828 --rc geninfo_unexecuted_blocks=1 00:15:04.828 00:15:04.828 ' 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.828 --rc genhtml_branch_coverage=1 00:15:04.828 --rc genhtml_function_coverage=1 00:15:04.828 --rc genhtml_legend=1 00:15:04.828 --rc geninfo_all_blocks=1 00:15:04.828 --rc geninfo_unexecuted_blocks=1 00:15:04.828 00:15:04.828 ' 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.828 --rc genhtml_branch_coverage=1 00:15:04.828 --rc genhtml_function_coverage=1 00:15:04.828 --rc genhtml_legend=1 00:15:04.828 --rc geninfo_all_blocks=1 00:15:04.828 --rc geninfo_unexecuted_blocks=1 00:15:04.828 00:15:04.828 ' 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.828 --rc genhtml_branch_coverage=1 00:15:04.828 --rc genhtml_function_coverage=1 00:15:04.828 --rc genhtml_legend=1 00:15:04.828 
--rc geninfo_all_blocks=1 00:15:04.828 --rc geninfo_unexecuted_blocks=1 00:15:04.828 00:15:04.828 ' 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.828 
12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:04.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:04.828 12:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:04.828 12:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:04.828 12:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.106 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:10.106 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:10.106 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:10.106 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:10.106 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:10.106 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:10.106 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:10.106 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:10.106 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:10.106 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:10.107 12:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:10.107 12:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:10.107 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:10.107 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.107 
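The trace above shows `gather_supported_nvmf_pci_devs` bucketing NICs by PCI vendor:device ID (Intel 0x8086 E810/X722 parts, Mellanox 0x15b3 ConnectX parts) before selecting test interfaces; both ports found here (0x8086:0x159b) land in the e810 bucket. A minimal, hypothetical re-implementation of that classification in Python, with the ID table taken directly from the IDs visible in the trace (this is an illustration, not SPDK's actual code):

```python
# Hypothetical sketch of the vendor:device bucketing performed by
# gather_supported_nvmf_pci_devs; every ID below appears in the trace above.
INTEL, MELLANOX = 0x8086, 0x15B3

FAMILIES = {
    (INTEL, 0x1592): "e810",
    (INTEL, 0x159B): "e810",
    (INTEL, 0x37D2): "x722",
    (MELLANOX, 0xA2DC): "mlx",
    (MELLANOX, 0x1021): "mlx",
    (MELLANOX, 0xA2D6): "mlx",
    (MELLANOX, 0x101D): "mlx",
    (MELLANOX, 0x101B): "mlx",
    (MELLANOX, 0x1017): "mlx",
    (MELLANOX, 0x1019): "mlx",
    (MELLANOX, 0x1015): "mlx",
    (MELLANOX, 0x1013): "mlx",
}

def classify(vendor, device):
    """Return the NIC family for a vendor:device pair, or None if unsupported."""
    return FAMILIES.get((vendor, device))

# The two ports found in the log (0000:86:00.0/.1, 0x8086 - 0x159b):
print(classify(0x8086, 0x159B))  # -> e810
```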
12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:10.107 Found net devices under 0000:86:00.0: cvl_0_0 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:10.107 
12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:10.107 Found net devices under 0000:86:00.1: cvl_0_1 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:10.107 12:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:10.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:15:10.107 00:15:10.107 --- 10.0.0.2 ping statistics --- 00:15:10.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.107 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:15:10.107 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:10.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:15:10.107 00:15:10.107 --- 10.0.0.1 ping statistics --- 00:15:10.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.107 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1197300 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1197300 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1197300 ']' 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
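After `nvmfappstart` launches the target, `waitforlisten` (common/autotest_common.sh) blocks until the new process is alive and its RPC socket at `/var/tmp/spdk.sock` accepts connections, retrying up to `max_retries=100` as the trace shows. A rough Python equivalent of that polling loop, written only to illustrate the pattern (function name, delay, and error handling are assumptions, not SPDK's implementation):

```python
import os
import socket
import time

def wait_for_listen(pid, rpc_addr, max_retries=100, delay=0.1):
    """Poll until process `pid` is alive and the UNIX-domain socket at
    `rpc_addr` accepts a connection, mimicking the waitforlisten helper."""
    for _ in range(max_retries):
        try:
            os.kill(pid, 0)          # signal 0: liveness check, raises if dead
        except ProcessLookupError:
            raise RuntimeError("process %d exited before listening" % pid)
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(rpc_addr)  # succeeds once the target is listening
                return True
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(delay)        # socket not there yet; retry
    raise TimeoutError("%s never came up" % rpc_addr)
```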
00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:10.108 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1197457 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@752 -- # digest=null 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=6b584b87a4326b35226cbb29e45791fe362e72c3bf276644 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.C5Y 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 6b584b87a4326b35226cbb29e45791fe362e72c3bf276644 0 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 6b584b87a4326b35226cbb29e45791fe362e72c3bf276644 0 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=6b584b87a4326b35226cbb29e45791fe362e72c3bf276644 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.C5Y 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.C5Y 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.C5Y 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=3d0bc190a36e679e9537e398b2688f06bb7077148ea2b200df5f7db18ee61b69 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.flG 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 3d0bc190a36e679e9537e398b2688f06bb7077148ea2b200df5f7db18ee61b69 3 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 3d0bc190a36e679e9537e398b2688f06bb7077148ea2b200df5f7db18ee61b69 3 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=3d0bc190a36e679e9537e398b2688f06bb7077148ea2b200df5f7db18ee61b69 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # digest=3 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.flG 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.flG 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.flG 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=0f69de84911bcab01c806ae251bfe6e5 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.HzU 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 0f69de84911bcab01c806ae251bfe6e5 1 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 
0f69de84911bcab01c806ae251bfe6e5 1 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=0f69de84911bcab01c806ae251bfe6e5 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.HzU 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.HzU 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.HzU 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=cd9a891fab74c4c66896041ad0715a6a011808fa76dd7f4f 00:15:10.108 12:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.8yL 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key cd9a891fab74c4c66896041ad0715a6a011808fa76dd7f4f 2 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 cd9a891fab74c4c66896041ad0715a6a011808fa76dd7f4f 2 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=cd9a891fab74c4c66896041ad0715a6a011808fa76dd7f4f 00:15:10.108 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.8yL 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.8yL 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.8yL 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A 
digests 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=e3f96ca51cad8f5b65c610d41492072878333138d15104df 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.ANo 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key e3f96ca51cad8f5b65c610d41492072878333138d15104df 2 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 e3f96ca51cad8f5b65c610d41492072878333138d15104df 2 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=e3f96ca51cad8f5b65c610d41492072878333138d15104df 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.ANo 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.ANo 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.ANo 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=537733506021583bd4b08ff9294ce972 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.San 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 537733506021583bd4b08ff9294ce972 1 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 537733506021583bd4b08ff9294ce972 1 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=537733506021583bd4b08ff9294ce972 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 
00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.San 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.San 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.San 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a072d6a085acdb0e85389ebe1370f8e97e2195a2e908e300d35d0f1d14597435 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.qWX 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key a072d6a085acdb0e85389ebe1370f8e97e2195a2e908e300d35d0f1d14597435 3 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # 
format_key DHHC-1 a072d6a085acdb0e85389ebe1370f8e97e2195a2e908e300d35d0f1d14597435 3 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a072d6a085acdb0e85389ebe1370f8e97e2195a2e908e300d35d0f1d14597435 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:15:10.109 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.qWX 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.qWX 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.qWX 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1197300 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1197300 ']' 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
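Each `gen_dhchap_key` call above draws `len/2` random bytes with `xxd -p /dev/urandom`, then `format_dhchap_key` hands the hex string to an inline `python -` snippet to produce the `DHHC-1:...` secret. Assuming the NVMe DH-HMAC-CHAP secret representation (base64 of the secret bytes followed by their little-endian CRC-32, prefixed with a one-byte hash identifier: 0=none, 1=SHA-256, 2=SHA-384, 3=SHA-512, matching the `digests` map in the trace), a sketch of what that formatting step computes (the exact byte handling is an assumption, not a copy of SPDK's snippet):

```python
import base64
import zlib

def format_dhchap_key(key, hash_id):
    """Wrap a secret string as a DHHC-1 key: base64(secret + CRC-32(secret))
    with a two-hex-digit hash identifier.  Sketch only; treating the hex
    string verbatim as the secret bytes is an assumption."""
    secret = key.encode()
    crc = zlib.crc32(secret).to_bytes(4, "little")
    b64 = base64.b64encode(secret + crc).decode()
    return "DHHC-1:{:02x}:{}:".format(hash_id, b64)

# e.g. the 48-character null-digest key generated first in the trace:
print(format_dhchap_key("6b584b87a4326b35226cbb29e45791fe362e72c3bf276644", 0))
```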
00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1197457 /var/tmp/host.sock 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1197457 ']' 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:10.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:10.369 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.C5Y 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.C5Y 00:15:10.628 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.C5Y 00:15:10.887 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.flG ]] 00:15:10.887 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.flG 00:15:10.887 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.887 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.887 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.887 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.flG 00:15:10.887 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.flG 00:15:11.146 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:11.146 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HzU 00:15:11.146 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.146 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.146 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.146 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.HzU 00:15:11.146 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.HzU 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.8yL ]] 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8yL 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8yL 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8yL 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ANo 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ANo 00:15:11.405 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ANo 00:15:11.664 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.San ]] 00:15:11.664 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.San 00:15:11.664 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.664 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.664 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.664 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.San 00:15:11.664 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.San 00:15:11.923 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:11.923 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qWX 00:15:11.923 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.923 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.923 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.923 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qWX 00:15:11.923 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qWX 00:15:12.183 12:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.183 12:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.183 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.442 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.442 00:15:12.701 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.701 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.701 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.701 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.701 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.701 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.701 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:12.701 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.701 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.701 { 00:15:12.701 "cntlid": 1, 00:15:12.701 "qid": 0, 00:15:12.701 "state": "enabled", 00:15:12.701 "thread": "nvmf_tgt_poll_group_000", 00:15:12.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:12.701 "listen_address": { 00:15:12.701 "trtype": "TCP", 00:15:12.701 "adrfam": "IPv4", 00:15:12.701 "traddr": "10.0.0.2", 00:15:12.701 "trsvcid": "4420" 00:15:12.701 }, 00:15:12.701 "peer_address": { 00:15:12.701 "trtype": "TCP", 00:15:12.701 "adrfam": "IPv4", 00:15:12.701 "traddr": "10.0.0.1", 00:15:12.701 "trsvcid": "39840" 00:15:12.701 }, 00:15:12.701 "auth": { 00:15:12.701 "state": "completed", 00:15:12.701 "digest": "sha256", 00:15:12.701 "dhgroup": "null" 00:15:12.701 } 00:15:12.701 } 00:15:12.701 ]' 00:15:12.701 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.701 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.701 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.960 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:12.960 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.960 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.960 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.960 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.960 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:12.960 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:13.526 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.526 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:13.526 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.526 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.526 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.526 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.526 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:13.526 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.785 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.044 00:15:14.044 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.044 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.044 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.302 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.302 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.302 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.302 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.302 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.302 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.302 { 00:15:14.302 "cntlid": 3, 00:15:14.302 "qid": 0, 00:15:14.302 "state": "enabled", 00:15:14.302 "thread": "nvmf_tgt_poll_group_000", 00:15:14.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:14.302 "listen_address": { 00:15:14.302 "trtype": "TCP", 00:15:14.302 "adrfam": "IPv4", 00:15:14.302 
"traddr": "10.0.0.2", 00:15:14.302 "trsvcid": "4420" 00:15:14.302 }, 00:15:14.302 "peer_address": { 00:15:14.302 "trtype": "TCP", 00:15:14.302 "adrfam": "IPv4", 00:15:14.302 "traddr": "10.0.0.1", 00:15:14.302 "trsvcid": "36600" 00:15:14.302 }, 00:15:14.302 "auth": { 00:15:14.302 "state": "completed", 00:15:14.302 "digest": "sha256", 00:15:14.302 "dhgroup": "null" 00:15:14.302 } 00:15:14.302 } 00:15:14.302 ]' 00:15:14.302 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.302 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.302 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.302 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:14.302 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.561 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.561 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.561 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.561 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:14.561 12:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.292 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.578 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.578 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.578 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.578 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.578 00:15:15.578 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.578 12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.578 
12:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.836 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.836 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.836 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.836 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.836 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.836 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.836 { 00:15:15.836 "cntlid": 5, 00:15:15.836 "qid": 0, 00:15:15.836 "state": "enabled", 00:15:15.836 "thread": "nvmf_tgt_poll_group_000", 00:15:15.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:15.836 "listen_address": { 00:15:15.836 "trtype": "TCP", 00:15:15.836 "adrfam": "IPv4", 00:15:15.836 "traddr": "10.0.0.2", 00:15:15.836 "trsvcid": "4420" 00:15:15.836 }, 00:15:15.836 "peer_address": { 00:15:15.836 "trtype": "TCP", 00:15:15.836 "adrfam": "IPv4", 00:15:15.837 "traddr": "10.0.0.1", 00:15:15.837 "trsvcid": "36624" 00:15:15.837 }, 00:15:15.837 "auth": { 00:15:15.837 "state": "completed", 00:15:15.837 "digest": "sha256", 00:15:15.837 "dhgroup": "null" 00:15:15.837 } 00:15:15.837 } 00:15:15.837 ]' 00:15:15.837 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.837 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.837 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:15.837 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:15.837 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.837 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.837 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.837 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.095 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:16.095 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:16.664 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.664 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:16.664 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.664 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.664 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.664 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.664 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:16.664 12:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:16.923 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.182 00:15:17.182 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.182 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.182 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.441 
12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.441 { 00:15:17.441 "cntlid": 7, 00:15:17.441 "qid": 0, 00:15:17.441 "state": "enabled", 00:15:17.441 "thread": "nvmf_tgt_poll_group_000", 00:15:17.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:17.441 "listen_address": { 00:15:17.441 "trtype": "TCP", 00:15:17.441 "adrfam": "IPv4", 00:15:17.441 "traddr": "10.0.0.2", 00:15:17.441 "trsvcid": "4420" 00:15:17.441 }, 00:15:17.441 "peer_address": { 00:15:17.441 "trtype": "TCP", 00:15:17.441 "adrfam": "IPv4", 00:15:17.441 "traddr": "10.0.0.1", 00:15:17.441 "trsvcid": "36634" 00:15:17.441 }, 00:15:17.441 "auth": { 00:15:17.441 "state": "completed", 00:15:17.441 "digest": "sha256", 00:15:17.441 "dhgroup": "null" 00:15:17.441 } 00:15:17.441 } 00:15:17.441 ]' 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.441 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.700 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:17.700 12:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:18.265 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.265 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:18.265 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.265 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.265 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.265 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.265 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.265 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:18.265 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.524 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.783 00:15:18.783 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.783 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.783 12:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.043 { 00:15:19.043 "cntlid": 9, 00:15:19.043 "qid": 0, 00:15:19.043 "state": "enabled", 00:15:19.043 "thread": "nvmf_tgt_poll_group_000", 00:15:19.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:19.043 "listen_address": { 00:15:19.043 "trtype": "TCP", 00:15:19.043 "adrfam": "IPv4", 00:15:19.043 "traddr": "10.0.0.2", 00:15:19.043 "trsvcid": "4420" 00:15:19.043 }, 00:15:19.043 "peer_address": { 00:15:19.043 "trtype": "TCP", 00:15:19.043 "adrfam": "IPv4", 00:15:19.043 "traddr": "10.0.0.1", 00:15:19.043 "trsvcid": "36656" 00:15:19.043 
}, 00:15:19.043 "auth": { 00:15:19.043 "state": "completed", 00:15:19.043 "digest": "sha256", 00:15:19.043 "dhgroup": "ffdhe2048" 00:15:19.043 } 00:15:19.043 } 00:15:19.043 ]' 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.043 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.301 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:19.301 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret 
DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:19.867 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.867 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:19.867 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.867 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.867 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.867 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.867 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:19.867 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.126 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.385 00:15:20.385 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.385 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.385 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.644 { 00:15:20.644 "cntlid": 11, 00:15:20.644 "qid": 0, 00:15:20.644 "state": "enabled", 00:15:20.644 "thread": "nvmf_tgt_poll_group_000", 00:15:20.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:20.644 "listen_address": { 00:15:20.644 "trtype": "TCP", 00:15:20.644 "adrfam": "IPv4", 00:15:20.644 "traddr": "10.0.0.2", 00:15:20.644 "trsvcid": "4420" 00:15:20.644 }, 00:15:20.644 "peer_address": { 00:15:20.644 "trtype": "TCP", 00:15:20.644 "adrfam": "IPv4", 00:15:20.644 "traddr": "10.0.0.1", 00:15:20.644 "trsvcid": "36676" 00:15:20.644 }, 00:15:20.644 "auth": { 00:15:20.644 "state": "completed", 00:15:20.644 "digest": "sha256", 00:15:20.644 "dhgroup": "ffdhe2048" 00:15:20.644 } 00:15:20.644 } 00:15:20.644 ]' 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.644 12:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.644 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.903 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:20.903 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:21.469 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.469 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:21.469 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:21.469 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.469 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.469 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:21.469 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.728 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.987 00:15:21.987 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.987 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.987 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.987 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.987 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.987 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.987 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.246 12:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.246 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.246 { 00:15:22.246 "cntlid": 13, 00:15:22.246 "qid": 0, 00:15:22.246 "state": "enabled", 00:15:22.246 "thread": "nvmf_tgt_poll_group_000", 00:15:22.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:22.246 "listen_address": { 00:15:22.246 "trtype": "TCP", 00:15:22.246 "adrfam": "IPv4", 00:15:22.246 "traddr": "10.0.0.2", 00:15:22.246 "trsvcid": "4420" 00:15:22.246 }, 00:15:22.246 "peer_address": { 00:15:22.246 "trtype": "TCP", 00:15:22.246 "adrfam": "IPv4", 00:15:22.246 "traddr": "10.0.0.1", 00:15:22.246 "trsvcid": "36718" 00:15:22.246 }, 00:15:22.246 "auth": { 00:15:22.246 "state": "completed", 00:15:22.246 "digest": "sha256", 00:15:22.246 "dhgroup": "ffdhe2048" 00:15:22.246 } 00:15:22.246 } 00:15:22.246 ]' 00:15:22.246 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.246 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.246 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.246 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:22.246 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.246 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.246 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.246 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.505 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:22.505 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:23.073 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.073 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:23.073 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.073 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.073 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.073 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.073 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:23.073 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.331 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.591 00:15:23.591 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.591 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.591 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.591 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.591 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.591 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.591 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.591 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.591 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.591 { 00:15:23.591 "cntlid": 15, 00:15:23.591 "qid": 0, 00:15:23.591 "state": "enabled", 00:15:23.591 "thread": "nvmf_tgt_poll_group_000", 00:15:23.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:23.591 "listen_address": { 00:15:23.591 "trtype": "TCP", 00:15:23.591 "adrfam": "IPv4", 00:15:23.591 "traddr": "10.0.0.2", 00:15:23.591 "trsvcid": "4420" 00:15:23.591 }, 00:15:23.591 "peer_address": { 00:15:23.591 "trtype": "TCP", 00:15:23.591 "adrfam": "IPv4", 00:15:23.591 "traddr": "10.0.0.1", 
00:15:23.591 "trsvcid": "54476" 00:15:23.591 }, 00:15:23.591 "auth": { 00:15:23.591 "state": "completed", 00:15:23.591 "digest": "sha256", 00:15:23.591 "dhgroup": "ffdhe2048" 00:15:23.591 } 00:15:23.591 } 00:15:23.591 ]' 00:15:23.591 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.850 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.850 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.850 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.850 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.850 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.850 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.850 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.108 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:24.108 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:24.674 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.674 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:24.674 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.674 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.674 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.674 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.675 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.675 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:24.675 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:24.675 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:24.675 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.675 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:24.675 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:24.675 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:24.675 12:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.675 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.675 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.675 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.933 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.933 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.933 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.933 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.933 00:15:25.192 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.192 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.192 12:55:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.192 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.192 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.192 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.192 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.192 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.192 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.192 { 00:15:25.192 "cntlid": 17, 00:15:25.192 "qid": 0, 00:15:25.192 "state": "enabled", 00:15:25.192 "thread": "nvmf_tgt_poll_group_000", 00:15:25.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:25.192 "listen_address": { 00:15:25.192 "trtype": "TCP", 00:15:25.192 "adrfam": "IPv4", 00:15:25.192 "traddr": "10.0.0.2", 00:15:25.192 "trsvcid": "4420" 00:15:25.192 }, 00:15:25.192 "peer_address": { 00:15:25.192 "trtype": "TCP", 00:15:25.192 "adrfam": "IPv4", 00:15:25.192 "traddr": "10.0.0.1", 00:15:25.192 "trsvcid": "54512" 00:15:25.192 }, 00:15:25.192 "auth": { 00:15:25.192 "state": "completed", 00:15:25.192 "digest": "sha256", 00:15:25.192 "dhgroup": "ffdhe3072" 00:15:25.192 } 00:15:25.192 } 00:15:25.192 ]' 00:15:25.192 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.452 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.452 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.452 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:25.452 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.452 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.452 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.452 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.711 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:25.711 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.279 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.538 00:15:26.796 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.796 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.796 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.796 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.796 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.796 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.796 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.796 
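[Editor's note: the `target/auth.sh@73`–`@77` checks in this trace query `nvmf_subsystem_get_qpairs` over the RPC socket and pick out the negotiated auth parameters with jq (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). A minimal Python sketch of that same verification is below; the qpairs payload is hard-coded from the ffdhe3072 iteration of this log, not fetched from a live target, and `check_auth` is a hypothetical helper, not part of auth.sh.]

```python
import json

# Example qpairs payload, shaped like the `nvmf_subsystem_get_qpairs`
# output seen in this log (values copied from the ffdhe3072 iteration;
# not a live query).
qpairs_json = """
[
  {
    "cntlid": 17,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "ffdhe3072"
    }
  }
]
"""

def check_auth(qpairs, digest, dhgroup):
    # Mirror of the jq checks in the trace: authentication must have
    # completed with the expected digest and DH group on qpair 0.
    auth = qpairs[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

qpairs = json.loads(qpairs_json)
print(check_auth(qpairs, "sha256", "ffdhe3072"))  # True for this payload
```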
12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.796 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.796 { 00:15:26.796 "cntlid": 19, 00:15:26.796 "qid": 0, 00:15:26.796 "state": "enabled", 00:15:26.796 "thread": "nvmf_tgt_poll_group_000", 00:15:26.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:26.796 "listen_address": { 00:15:26.796 "trtype": "TCP", 00:15:26.796 "adrfam": "IPv4", 00:15:26.796 "traddr": "10.0.0.2", 00:15:26.796 "trsvcid": "4420" 00:15:26.796 }, 00:15:26.796 "peer_address": { 00:15:26.796 "trtype": "TCP", 00:15:26.796 "adrfam": "IPv4", 00:15:26.796 "traddr": "10.0.0.1", 00:15:26.796 "trsvcid": "54528" 00:15:26.796 }, 00:15:26.796 "auth": { 00:15:26.796 "state": "completed", 00:15:26.796 "digest": "sha256", 00:15:26.796 "dhgroup": "ffdhe3072" 00:15:26.796 } 00:15:26.796 } 00:15:26.796 ]' 00:15:26.796 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.796 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.796 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.054 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:27.054 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.054 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.054 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.054 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.313 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:27.313 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:27.879 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.879 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:27.879 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.879 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.879 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.879 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.879 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.879 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.879 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:27.879 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.879 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.879 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:27.879 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:27.879 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.879 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.879 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.879 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.879 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.879 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.879 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.879 12:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.138 00:15:28.138 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.138 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.138 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.397 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.397 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.397 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.397 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.397 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.397 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.397 { 00:15:28.397 "cntlid": 21, 00:15:28.397 "qid": 0, 00:15:28.397 "state": "enabled", 00:15:28.397 "thread": "nvmf_tgt_poll_group_000", 00:15:28.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:28.397 "listen_address": { 00:15:28.397 "trtype": "TCP", 00:15:28.397 "adrfam": "IPv4", 00:15:28.397 "traddr": "10.0.0.2", 00:15:28.397 "trsvcid": "4420" 00:15:28.397 }, 00:15:28.397 "peer_address": { 
00:15:28.397 "trtype": "TCP", 00:15:28.397 "adrfam": "IPv4", 00:15:28.397 "traddr": "10.0.0.1", 00:15:28.397 "trsvcid": "54566" 00:15:28.397 }, 00:15:28.397 "auth": { 00:15:28.397 "state": "completed", 00:15:28.397 "digest": "sha256", 00:15:28.397 "dhgroup": "ffdhe3072" 00:15:28.397 } 00:15:28.397 } 00:15:28.397 ]' 00:15:28.397 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.397 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.397 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.656 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:28.656 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.656 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.656 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.656 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.915 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:28.915 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:29.483 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:29.484 12:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.484 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.743 00:15:29.743 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.743 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.743 12:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.002 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.002 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.002 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.002 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.002 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.002 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.002 { 00:15:30.002 "cntlid": 23, 00:15:30.002 "qid": 0, 00:15:30.002 "state": "enabled", 00:15:30.002 "thread": "nvmf_tgt_poll_group_000", 00:15:30.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:30.002 "listen_address": { 00:15:30.002 "trtype": "TCP", 00:15:30.002 "adrfam": "IPv4", 00:15:30.002 "traddr": "10.0.0.2", 00:15:30.002 "trsvcid": "4420" 00:15:30.002 }, 00:15:30.002 "peer_address": { 00:15:30.002 "trtype": "TCP", 00:15:30.002 "adrfam": "IPv4", 00:15:30.002 "traddr": "10.0.0.1", 00:15:30.002 "trsvcid": "54614" 00:15:30.002 }, 00:15:30.002 "auth": { 00:15:30.002 "state": "completed", 00:15:30.002 "digest": "sha256", 00:15:30.002 "dhgroup": "ffdhe3072" 00:15:30.002 } 00:15:30.002 } 00:15:30.002 ]' 00:15:30.002 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.002 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.002 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.261 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:30.261 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.261 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.261 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.261 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.261 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:30.261 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:30.829 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.829 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:30.829 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.829 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.829 12:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.829 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.829 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.829 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:30.829 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:31.088 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:31.088 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.088 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.088 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:31.088 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:31.088 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.088 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.088 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.088 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.088 
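[Editor's note: the `target/auth.sh@119` / `@120` markers above come from nested loops in the test script: an outer loop over the configured DH groups and an inner loop over key ids, calling `connect_authenticate` once per pair after reconfiguring the host with `bdev_nvme_set_options`. A hedged sketch of that iteration order follows; the group and key lists are inferred from this log excerpt, and the real auth.sh may configure more of each.]

```python
from itertools import product

# DH groups and key ids exercised in this excerpt of the log;
# assumed, not read from target/auth.sh itself.
dhgroups = ["ffdhe2048", "ffdhe3072", "ffdhe4096"]
keyids = [0, 1, 2, 3]

# One connect_authenticate run per (digest, dhgroup, keyid) triple,
# matching the @119 (dhgroup) and @120 (keyid) loop markers.
runs = [("sha256", g, k) for g, k in product(dhgroups, keyids)]
print(len(runs))  # 12 combinations for this digest
```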
12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.088 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.088 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.088 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.347 00:15:31.347 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.347 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.347 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.605 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.605 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.606 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.606 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.606 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.606 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.606 { 00:15:31.606 "cntlid": 25, 00:15:31.606 "qid": 0, 00:15:31.606 "state": "enabled", 00:15:31.606 "thread": "nvmf_tgt_poll_group_000", 00:15:31.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:31.606 "listen_address": { 00:15:31.606 "trtype": "TCP", 00:15:31.606 "adrfam": "IPv4", 00:15:31.606 "traddr": "10.0.0.2", 00:15:31.606 "trsvcid": "4420" 00:15:31.606 }, 00:15:31.606 "peer_address": { 00:15:31.606 "trtype": "TCP", 00:15:31.606 "adrfam": "IPv4", 00:15:31.606 "traddr": "10.0.0.1", 00:15:31.606 "trsvcid": "54650" 00:15:31.606 }, 00:15:31.606 "auth": { 00:15:31.606 "state": "completed", 00:15:31.606 "digest": "sha256", 00:15:31.606 "dhgroup": "ffdhe4096" 00:15:31.606 } 00:15:31.606 } 00:15:31.606 ]' 00:15:31.606 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.606 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.606 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.606 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.606 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.865 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.865 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.865 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:15:31.865 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:31.865 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:32.433 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.433 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:32.433 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.433 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.433 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.433 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.433 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:32.433 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:32.692 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:32.692 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.692 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.692 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:32.692 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:32.692 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.692 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.692 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.692 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.692 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.692 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.692 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.692 12:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.951 00:15:32.951 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.951 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.951 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.209 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.210 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.210 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.210 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.210 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.210 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.210 { 00:15:33.210 "cntlid": 27, 00:15:33.210 "qid": 0, 00:15:33.210 "state": "enabled", 00:15:33.210 "thread": "nvmf_tgt_poll_group_000", 00:15:33.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:33.210 "listen_address": { 00:15:33.210 "trtype": "TCP", 00:15:33.210 "adrfam": "IPv4", 00:15:33.210 "traddr": "10.0.0.2", 00:15:33.210 "trsvcid": "4420" 00:15:33.210 }, 00:15:33.210 "peer_address": { 
00:15:33.210 "trtype": "TCP", 00:15:33.210 "adrfam": "IPv4", 00:15:33.210 "traddr": "10.0.0.1", 00:15:33.210 "trsvcid": "54682" 00:15:33.210 }, 00:15:33.210 "auth": { 00:15:33.210 "state": "completed", 00:15:33.210 "digest": "sha256", 00:15:33.210 "dhgroup": "ffdhe4096" 00:15:33.210 } 00:15:33.210 } 00:15:33.210 ]' 00:15:33.210 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.210 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.210 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.210 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:33.210 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.468 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.468 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.468 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.468 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:33.468 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:34.036 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.036 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:34.036 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.036 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.036 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.036 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.036 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:34.036 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:34.294 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:34.295 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.295 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.295 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:34.295 12:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:34.295 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.295 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.295 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.295 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.295 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.295 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.295 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.295 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.553 00:15:34.553 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.553 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.553 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.812 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.812 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.812 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.812 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.812 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.812 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.812 { 00:15:34.812 "cntlid": 29, 00:15:34.812 "qid": 0, 00:15:34.812 "state": "enabled", 00:15:34.812 "thread": "nvmf_tgt_poll_group_000", 00:15:34.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:34.812 "listen_address": { 00:15:34.812 "trtype": "TCP", 00:15:34.812 "adrfam": "IPv4", 00:15:34.812 "traddr": "10.0.0.2", 00:15:34.812 "trsvcid": "4420" 00:15:34.812 }, 00:15:34.812 "peer_address": { 00:15:34.812 "trtype": "TCP", 00:15:34.812 "adrfam": "IPv4", 00:15:34.812 "traddr": "10.0.0.1", 00:15:34.812 "trsvcid": "51152" 00:15:34.812 }, 00:15:34.812 "auth": { 00:15:34.812 "state": "completed", 00:15:34.812 "digest": "sha256", 00:15:34.812 "dhgroup": "ffdhe4096" 00:15:34.812 } 00:15:34.812 } 00:15:34.812 ]' 00:15:34.812 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.812 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.812 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:15:34.812 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:34.812 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.070 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.070 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.070 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.070 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:35.070 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:35.637 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.637 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:35.637 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.637 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.637 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.637 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.637 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:35.637 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:35.895 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:35.895 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.895 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.895 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:35.895 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:35.895 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.895 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:35.895 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.895 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:15:35.895 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.895 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:35.895 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.896 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:36.154 00:15:36.154 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.154 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.154 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.413 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.413 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.413 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.413 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.413 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:36.413 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.413 { 00:15:36.413 "cntlid": 31, 00:15:36.413 "qid": 0, 00:15:36.413 "state": "enabled", 00:15:36.413 "thread": "nvmf_tgt_poll_group_000", 00:15:36.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:36.413 "listen_address": { 00:15:36.413 "trtype": "TCP", 00:15:36.413 "adrfam": "IPv4", 00:15:36.413 "traddr": "10.0.0.2", 00:15:36.413 "trsvcid": "4420" 00:15:36.413 }, 00:15:36.413 "peer_address": { 00:15:36.413 "trtype": "TCP", 00:15:36.413 "adrfam": "IPv4", 00:15:36.413 "traddr": "10.0.0.1", 00:15:36.413 "trsvcid": "51178" 00:15:36.413 }, 00:15:36.413 "auth": { 00:15:36.413 "state": "completed", 00:15:36.413 "digest": "sha256", 00:15:36.413 "dhgroup": "ffdhe4096" 00:15:36.413 } 00:15:36.413 } 00:15:36.413 ]' 00:15:36.413 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.413 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.413 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.672 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:36.672 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.672 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.672 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.672 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.672 12:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:36.672 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:37.239 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.239 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:37.239 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.239 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.497 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.497 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.497 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.498 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.070 00:15:38.070 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.070 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.070 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.070 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.070 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.070 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.070 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.070 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.070 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.070 { 00:15:38.070 "cntlid": 33, 00:15:38.070 "qid": 0, 00:15:38.070 "state": "enabled", 00:15:38.070 "thread": "nvmf_tgt_poll_group_000", 00:15:38.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:38.070 "listen_address": { 00:15:38.070 "trtype": "TCP", 00:15:38.070 "adrfam": "IPv4", 00:15:38.070 "traddr": "10.0.0.2", 00:15:38.070 "trsvcid": "4420" 00:15:38.070 }, 00:15:38.070 "peer_address": { 00:15:38.070 "trtype": "TCP", 00:15:38.070 "adrfam": "IPv4", 
00:15:38.070 "traddr": "10.0.0.1", 00:15:38.070 "trsvcid": "51208" 00:15:38.070 }, 00:15:38.070 "auth": { 00:15:38.070 "state": "completed", 00:15:38.070 "digest": "sha256", 00:15:38.070 "dhgroup": "ffdhe6144" 00:15:38.070 } 00:15:38.070 } 00:15:38.070 ]' 00:15:38.070 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.070 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.070 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.329 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.329 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.329 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.329 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.329 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.588 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:38.588 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.156 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.724 00:15:39.724 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.724 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.724 
12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.724 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.724 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.724 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.724 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.724 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.724 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.724 { 00:15:39.724 "cntlid": 35, 00:15:39.724 "qid": 0, 00:15:39.724 "state": "enabled", 00:15:39.724 "thread": "nvmf_tgt_poll_group_000", 00:15:39.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:39.724 "listen_address": { 00:15:39.724 "trtype": "TCP", 00:15:39.724 "adrfam": "IPv4", 00:15:39.724 "traddr": "10.0.0.2", 00:15:39.724 "trsvcid": "4420" 00:15:39.724 }, 00:15:39.724 "peer_address": { 00:15:39.724 "trtype": "TCP", 00:15:39.724 "adrfam": "IPv4", 00:15:39.724 "traddr": "10.0.0.1", 00:15:39.724 "trsvcid": "51242" 00:15:39.724 }, 00:15:39.724 "auth": { 00:15:39.724 "state": "completed", 00:15:39.724 "digest": "sha256", 00:15:39.724 "dhgroup": "ffdhe6144" 00:15:39.724 } 00:15:39.724 } 00:15:39.724 ]' 00:15:39.724 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.982 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.982 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.982 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:39.982 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.982 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.982 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.982 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.240 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:40.240 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:40.807 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.807 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.807 12:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.807 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.807 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.807 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.807 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:40.807 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:40.807 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:40.807 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.807 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.807 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:40.807 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:40.807 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.807 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.807 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.807 12:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.807 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.807 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.807 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.807 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.375 00:15:41.375 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.375 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.375 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.375 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.375 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.375 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.375 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.375 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.375 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.375 { 00:15:41.375 "cntlid": 37, 00:15:41.375 "qid": 0, 00:15:41.375 "state": "enabled", 00:15:41.375 "thread": "nvmf_tgt_poll_group_000", 00:15:41.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:41.375 "listen_address": { 00:15:41.375 "trtype": "TCP", 00:15:41.375 "adrfam": "IPv4", 00:15:41.375 "traddr": "10.0.0.2", 00:15:41.375 "trsvcid": "4420" 00:15:41.375 }, 00:15:41.375 "peer_address": { 00:15:41.375 "trtype": "TCP", 00:15:41.375 "adrfam": "IPv4", 00:15:41.375 "traddr": "10.0.0.1", 00:15:41.375 "trsvcid": "51260" 00:15:41.375 }, 00:15:41.375 "auth": { 00:15:41.375 "state": "completed", 00:15:41.375 "digest": "sha256", 00:15:41.375 "dhgroup": "ffdhe6144" 00:15:41.375 } 00:15:41.375 } 00:15:41.375 ]' 00:15:41.375 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.633 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.633 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.633 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:41.633 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.633 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.633 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.633 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.891 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:41.891 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:42.458 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.458 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:42.458 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.458 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.458 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.458 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.458 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:42.458 12:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:42.717 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:42.717 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.717 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.717 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:42.717 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:42.717 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.717 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:42.717 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.717 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.717 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.717 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:42.717 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:42.717 12:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:42.976 00:15:42.976 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.976 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.976 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.236 { 00:15:43.236 "cntlid": 39, 00:15:43.236 "qid": 0, 00:15:43.236 "state": "enabled", 00:15:43.236 "thread": "nvmf_tgt_poll_group_000", 00:15:43.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:43.236 "listen_address": { 00:15:43.236 "trtype": "TCP", 00:15:43.236 "adrfam": "IPv4", 00:15:43.236 "traddr": "10.0.0.2", 00:15:43.236 "trsvcid": "4420" 00:15:43.236 }, 00:15:43.236 "peer_address": { 00:15:43.236 "trtype": 
"TCP", 00:15:43.236 "adrfam": "IPv4", 00:15:43.236 "traddr": "10.0.0.1", 00:15:43.236 "trsvcid": "51270" 00:15:43.236 }, 00:15:43.236 "auth": { 00:15:43.236 "state": "completed", 00:15:43.236 "digest": "sha256", 00:15:43.236 "dhgroup": "ffdhe6144" 00:15:43.236 } 00:15:43.236 } 00:15:43.236 ]' 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.236 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.495 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:43.495 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 
00:15:44.061 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.062 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:44.062 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.062 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.062 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.062 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.062 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.062 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:44.062 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key0 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.321 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.890 00:15:44.890 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.890 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.890 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.890 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.890 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.890 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.890 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.890 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.890 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.890 { 00:15:44.890 "cntlid": 41, 00:15:44.890 "qid": 0, 00:15:44.890 "state": "enabled", 00:15:44.890 "thread": "nvmf_tgt_poll_group_000", 00:15:44.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:44.890 "listen_address": { 00:15:44.890 "trtype": "TCP", 00:15:44.890 "adrfam": "IPv4", 00:15:44.890 "traddr": "10.0.0.2", 00:15:44.890 "trsvcid": "4420" 00:15:44.890 }, 00:15:44.890 "peer_address": { 00:15:44.890 "trtype": "TCP", 00:15:44.890 "adrfam": "IPv4", 00:15:44.890 "traddr": "10.0.0.1", 00:15:44.890 "trsvcid": "55466" 00:15:44.890 }, 00:15:44.890 "auth": { 00:15:44.890 "state": "completed", 00:15:44.890 "digest": "sha256", 00:15:44.890 "dhgroup": "ffdhe8192" 00:15:44.890 } 00:15:44.890 } 00:15:44.890 ]' 00:15:44.890 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.890 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.890 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.149 12:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.149 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.149 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.149 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.149 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.408 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:45.408 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:45.976 12:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.976 12:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.976 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.544 00:15:46.544 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.544 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.544 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.803 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.803 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.803 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.803 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:46.803 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.803 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.803 { 00:15:46.803 "cntlid": 43, 00:15:46.803 "qid": 0, 00:15:46.803 "state": "enabled", 00:15:46.803 "thread": "nvmf_tgt_poll_group_000", 00:15:46.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:46.803 "listen_address": { 00:15:46.803 "trtype": "TCP", 00:15:46.803 "adrfam": "IPv4", 00:15:46.803 "traddr": "10.0.0.2", 00:15:46.803 "trsvcid": "4420" 00:15:46.803 }, 00:15:46.803 "peer_address": { 00:15:46.803 "trtype": "TCP", 00:15:46.803 "adrfam": "IPv4", 00:15:46.803 "traddr": "10.0.0.1", 00:15:46.803 "trsvcid": "55476" 00:15:46.803 }, 00:15:46.803 "auth": { 00:15:46.803 "state": "completed", 00:15:46.803 "digest": "sha256", 00:15:46.803 "dhgroup": "ffdhe8192" 00:15:46.803 } 00:15:46.803 } 00:15:46.803 ]' 00:15:46.803 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.803 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.803 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.803 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.803 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.803 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.803 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.803 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.061 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:47.061 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:47.648 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.648 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:47.648 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.648 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.648 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.648 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.648 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:47.648 12:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:47.906 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:47.906 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.906 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.906 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:47.906 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:47.906 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.907 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.907 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.907 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.907 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.907 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.907 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.907 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.473 00:15:48.473 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.473 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.473 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.473 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.473 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.473 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.473 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.473 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.473 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.473 { 00:15:48.473 "cntlid": 45, 00:15:48.473 "qid": 0, 00:15:48.473 "state": "enabled", 00:15:48.473 "thread": "nvmf_tgt_poll_group_000", 00:15:48.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:48.473 "listen_address": { 00:15:48.473 "trtype": "TCP", 00:15:48.473 "adrfam": "IPv4", 00:15:48.473 "traddr": "10.0.0.2", 00:15:48.473 
"trsvcid": "4420" 00:15:48.473 }, 00:15:48.473 "peer_address": { 00:15:48.473 "trtype": "TCP", 00:15:48.473 "adrfam": "IPv4", 00:15:48.473 "traddr": "10.0.0.1", 00:15:48.473 "trsvcid": "55510" 00:15:48.473 }, 00:15:48.473 "auth": { 00:15:48.473 "state": "completed", 00:15:48.473 "digest": "sha256", 00:15:48.473 "dhgroup": "ffdhe8192" 00:15:48.473 } 00:15:48.473 } 00:15:48.473 ]' 00:15:48.731 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.731 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.731 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.731 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:48.731 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.731 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.731 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.731 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.990 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:48.990 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:49.558 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.558 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:49.558 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.558 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.558 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.558 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.558 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:49.558 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.817 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.075 00:15:50.333 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.333 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.333 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.333 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.333 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.333 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.333 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.333 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.333 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.333 { 00:15:50.333 "cntlid": 47, 00:15:50.333 "qid": 0, 00:15:50.333 "state": "enabled", 00:15:50.333 "thread": "nvmf_tgt_poll_group_000", 00:15:50.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:50.333 "listen_address": { 00:15:50.333 "trtype": "TCP", 00:15:50.333 "adrfam": "IPv4", 00:15:50.333 "traddr": "10.0.0.2", 00:15:50.333 "trsvcid": "4420" 00:15:50.333 }, 00:15:50.333 "peer_address": { 00:15:50.333 "trtype": "TCP", 00:15:50.333 "adrfam": "IPv4", 00:15:50.333 "traddr": "10.0.0.1", 00:15:50.333 "trsvcid": "55550" 00:15:50.333 }, 00:15:50.333 "auth": { 00:15:50.333 "state": "completed", 00:15:50.333 "digest": "sha256", 00:15:50.333 "dhgroup": "ffdhe8192" 00:15:50.333 } 00:15:50.333 } 00:15:50.333 ]' 00:15:50.333 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.333 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.333 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.592 12:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:50.592 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.592 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.592 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.592 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.852 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:50.852 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.420 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.678 00:15:51.678 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.678 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.678 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.936 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.936 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.936 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.936 12:56:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.936 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.936 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.936 { 00:15:51.936 "cntlid": 49, 00:15:51.936 "qid": 0, 00:15:51.936 "state": "enabled", 00:15:51.936 "thread": "nvmf_tgt_poll_group_000", 00:15:51.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:51.936 "listen_address": { 00:15:51.936 "trtype": "TCP", 00:15:51.936 "adrfam": "IPv4", 00:15:51.936 "traddr": "10.0.0.2", 00:15:51.936 "trsvcid": "4420" 00:15:51.936 }, 00:15:51.936 "peer_address": { 00:15:51.936 "trtype": "TCP", 00:15:51.936 "adrfam": "IPv4", 00:15:51.936 "traddr": "10.0.0.1", 00:15:51.936 "trsvcid": "55576" 00:15:51.936 }, 00:15:51.936 "auth": { 00:15:51.936 "state": "completed", 00:15:51.936 "digest": "sha384", 00:15:51.936 "dhgroup": "null" 00:15:51.936 } 00:15:51.936 } 00:15:51.936 ]' 00:15:51.936 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.936 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.936 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.936 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:51.937 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.194 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.194 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.194 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.194 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:52.194 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:52.762 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.762 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:52.762 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.762 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.762 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.762 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.762 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:52.762 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.022 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.281 00:15:53.281 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.281 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.281 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.539 { 00:15:53.539 "cntlid": 51, 00:15:53.539 "qid": 0, 00:15:53.539 "state": "enabled", 00:15:53.539 "thread": "nvmf_tgt_poll_group_000", 00:15:53.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:53.539 "listen_address": { 
00:15:53.539 "trtype": "TCP", 00:15:53.539 "adrfam": "IPv4", 00:15:53.539 "traddr": "10.0.0.2", 00:15:53.539 "trsvcid": "4420" 00:15:53.539 }, 00:15:53.539 "peer_address": { 00:15:53.539 "trtype": "TCP", 00:15:53.539 "adrfam": "IPv4", 00:15:53.539 "traddr": "10.0.0.1", 00:15:53.539 "trsvcid": "55602" 00:15:53.539 }, 00:15:53.539 "auth": { 00:15:53.539 "state": "completed", 00:15:53.539 "digest": "sha384", 00:15:53.539 "dhgroup": "null" 00:15:53.539 } 00:15:53.539 } 00:15:53.539 ]' 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.539 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.809 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:53.809 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:15:54.432 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.432 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:54.432 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.432 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.432 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.432 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.432 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:54.432 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:54.691 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:54.691 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.691 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.691 
12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:54.691 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:54.691 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.691 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.691 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.691 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.691 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.691 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.691 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.691 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.950 00:15:54.950 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.950 12:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.950 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.950 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.950 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.950 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.950 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.950 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.950 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.950 { 00:15:54.950 "cntlid": 53, 00:15:54.950 "qid": 0, 00:15:54.950 "state": "enabled", 00:15:54.950 "thread": "nvmf_tgt_poll_group_000", 00:15:54.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:54.950 "listen_address": { 00:15:54.950 "trtype": "TCP", 00:15:54.950 "adrfam": "IPv4", 00:15:54.950 "traddr": "10.0.0.2", 00:15:54.950 "trsvcid": "4420" 00:15:54.950 }, 00:15:54.950 "peer_address": { 00:15:54.950 "trtype": "TCP", 00:15:54.950 "adrfam": "IPv4", 00:15:54.950 "traddr": "10.0.0.1", 00:15:54.950 "trsvcid": "58530" 00:15:54.950 }, 00:15:54.950 "auth": { 00:15:54.950 "state": "completed", 00:15:54.950 "digest": "sha384", 00:15:54.950 "dhgroup": "null" 00:15:54.950 } 00:15:54.950 } 00:15:54.950 ]' 00:15:54.950 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.208 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:55.208 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.208 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:55.208 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.208 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.208 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.208 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.466 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:55.467 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:15:56.034 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.034 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:56.034 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.034 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.034 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.034 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.034 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:56.034 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.293 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.293 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.552 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.552 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.552 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.552 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
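The iteration structure visible in the log — `for dhgroup … for keyid …`, each pass restricting the host to one digest/dhgroup pair, registering the host with one key, and attaching a controller — can be condensed into a dry-run sketch. This is an assumption-laden simplification: the real test routes `nvmf_subsystem_*` calls through the target's RPC socket and `bdev_nvme_*` through `/var/tmp/host.sock`, while the `rpc` stand-in here just echoes every command through one path.

```shell
# Dry-run stand-in for scripts/rpc.py; echoes instead of executing.
rpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }

# Sketch of the sha384 portion of the test matrix (hypothetical wrapper name).
run_matrix() {
  local digest=sha384 dhgroup keyid
  for dhgroup in null ffdhe2048; do
    for keyid in 0 1 2 3; do
      rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-key "key$keyid"
      rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -b nvme0 --dhchap-key "key$keyid"
    done
  done
}

run_matrix
```

After each attach, the real script verifies the qpair's `auth` object (`digest`, `dhgroup`, `state == completed`) via `nvmf_subsystem_get_qpairs` and `jq`, then detaches and removes the host before the next key.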
00:15:56.552 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.552 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.552 { 00:15:56.552 "cntlid": 55, 00:15:56.552 "qid": 0, 00:15:56.552 "state": "enabled", 00:15:56.552 "thread": "nvmf_tgt_poll_group_000", 00:15:56.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:56.552 "listen_address": { 00:15:56.552 "trtype": "TCP", 00:15:56.552 "adrfam": "IPv4", 00:15:56.552 "traddr": "10.0.0.2", 00:15:56.552 "trsvcid": "4420" 00:15:56.552 }, 00:15:56.552 "peer_address": { 00:15:56.552 "trtype": "TCP", 00:15:56.552 "adrfam": "IPv4", 00:15:56.552 "traddr": "10.0.0.1", 00:15:56.552 "trsvcid": "58554" 00:15:56.552 }, 00:15:56.552 "auth": { 00:15:56.552 "state": "completed", 00:15:56.552 "digest": "sha384", 00:15:56.552 "dhgroup": "null" 00:15:56.552 } 00:15:56.552 } 00:15:56.552 ]' 00:15:56.552 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.552 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.552 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.811 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:56.811 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.811 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.811 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.811 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.069 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:57.069 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.636 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.637 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.637 12:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.896 00:15:57.896 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.896 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.896 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.154 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.154 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.155 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.155 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.155 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.155 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.155 { 00:15:58.155 "cntlid": 57, 00:15:58.155 "qid": 0, 00:15:58.155 "state": "enabled", 00:15:58.155 "thread": "nvmf_tgt_poll_group_000", 00:15:58.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:58.155 "listen_address": { 00:15:58.155 "trtype": "TCP", 00:15:58.155 "adrfam": "IPv4", 00:15:58.155 "traddr": "10.0.0.2", 00:15:58.155 "trsvcid": "4420" 00:15:58.155 }, 00:15:58.155 "peer_address": { 
00:15:58.155 "trtype": "TCP", 00:15:58.155 "adrfam": "IPv4", 00:15:58.155 "traddr": "10.0.0.1", 00:15:58.155 "trsvcid": "58578" 00:15:58.155 }, 00:15:58.155 "auth": { 00:15:58.155 "state": "completed", 00:15:58.155 "digest": "sha384", 00:15:58.155 "dhgroup": "ffdhe2048" 00:15:58.155 } 00:15:58.155 } 00:15:58.155 ]' 00:15:58.155 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.155 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.155 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.413 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.413 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.413 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.413 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.413 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.413 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:58.414 12:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:15:58.981 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.981 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:58.981 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.981 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.981 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.981 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.981 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:58.981 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:59.240 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:59.240 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.240 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.240 12:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:59.240 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:59.240 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.240 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.240 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.240 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.240 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.240 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.240 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.240 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.499 00:15:59.499 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.499 12:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.499 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.758 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.758 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.758 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.758 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.758 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.758 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.758 { 00:15:59.758 "cntlid": 59, 00:15:59.758 "qid": 0, 00:15:59.758 "state": "enabled", 00:15:59.758 "thread": "nvmf_tgt_poll_group_000", 00:15:59.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:59.758 "listen_address": { 00:15:59.758 "trtype": "TCP", 00:15:59.758 "adrfam": "IPv4", 00:15:59.758 "traddr": "10.0.0.2", 00:15:59.758 "trsvcid": "4420" 00:15:59.758 }, 00:15:59.758 "peer_address": { 00:15:59.758 "trtype": "TCP", 00:15:59.758 "adrfam": "IPv4", 00:15:59.758 "traddr": "10.0.0.1", 00:15:59.758 "trsvcid": "58600" 00:15:59.758 }, 00:15:59.758 "auth": { 00:15:59.758 "state": "completed", 00:15:59.758 "digest": "sha384", 00:15:59.758 "dhgroup": "ffdhe2048" 00:15:59.758 } 00:15:59.758 } 00:15:59.758 ]' 00:15:59.758 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.758 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:15:59.758 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.758 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.758 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.758 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.758 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.758 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.017 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:00.017 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:00.585 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.585 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:00.585 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.585 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.585 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.585 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.585 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:00.585 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:00.844 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:00.844 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.844 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.844 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:00.844 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:00.844 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.844 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.844 12:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.844 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.844 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.844 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.844 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.844 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.102 00:16:01.102 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.102 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.102 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.361 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.361 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.361 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.361 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.361 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.361 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.361 { 00:16:01.361 "cntlid": 61, 00:16:01.361 "qid": 0, 00:16:01.361 "state": "enabled", 00:16:01.361 "thread": "nvmf_tgt_poll_group_000", 00:16:01.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:01.361 "listen_address": { 00:16:01.361 "trtype": "TCP", 00:16:01.361 "adrfam": "IPv4", 00:16:01.361 "traddr": "10.0.0.2", 00:16:01.361 "trsvcid": "4420" 00:16:01.361 }, 00:16:01.361 "peer_address": { 00:16:01.361 "trtype": "TCP", 00:16:01.361 "adrfam": "IPv4", 00:16:01.362 "traddr": "10.0.0.1", 00:16:01.362 "trsvcid": "58618" 00:16:01.362 }, 00:16:01.362 "auth": { 00:16:01.362 "state": "completed", 00:16:01.362 "digest": "sha384", 00:16:01.362 "dhgroup": "ffdhe2048" 00:16:01.362 } 00:16:01.362 } 00:16:01.362 ]' 00:16:01.362 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.362 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.362 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.362 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.362 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.362 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.362 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:01.362 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.621 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:01.621 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:02.188 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.188 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:02.188 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.188 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.188 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.188 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.188 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.188 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.447 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.704 00:16:02.704 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.704 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.704 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.963 { 00:16:02.963 "cntlid": 63, 00:16:02.963 "qid": 0, 00:16:02.963 "state": "enabled", 00:16:02.963 "thread": "nvmf_tgt_poll_group_000", 00:16:02.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:02.963 "listen_address": { 00:16:02.963 "trtype": "TCP", 00:16:02.963 "adrfam": "IPv4", 00:16:02.963 "traddr": "10.0.0.2", 00:16:02.963 "trsvcid": 
"4420" 00:16:02.963 }, 00:16:02.963 "peer_address": { 00:16:02.963 "trtype": "TCP", 00:16:02.963 "adrfam": "IPv4", 00:16:02.963 "traddr": "10.0.0.1", 00:16:02.963 "trsvcid": "58648" 00:16:02.963 }, 00:16:02.963 "auth": { 00:16:02.963 "state": "completed", 00:16:02.963 "digest": "sha384", 00:16:02.963 "dhgroup": "ffdhe2048" 00:16:02.963 } 00:16:02.963 } 00:16:02.963 ]' 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.963 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.222 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:03.222 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:03.790 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.790 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:03.790 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.790 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.790 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.790 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.790 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.790 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.790 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.049 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.308 00:16:04.308 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.308 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:04.308 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.567 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.567 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.567 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.567 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.567 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.567 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.567 { 00:16:04.567 "cntlid": 65, 00:16:04.567 "qid": 0, 00:16:04.567 "state": "enabled", 00:16:04.567 "thread": "nvmf_tgt_poll_group_000", 00:16:04.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:04.567 "listen_address": { 00:16:04.567 "trtype": "TCP", 00:16:04.567 "adrfam": "IPv4", 00:16:04.567 "traddr": "10.0.0.2", 00:16:04.567 "trsvcid": "4420" 00:16:04.567 }, 00:16:04.567 "peer_address": { 00:16:04.567 "trtype": "TCP", 00:16:04.567 "adrfam": "IPv4", 00:16:04.567 "traddr": "10.0.0.1", 00:16:04.567 "trsvcid": "55904" 00:16:04.567 }, 00:16:04.567 "auth": { 00:16:04.567 "state": "completed", 00:16:04.567 "digest": "sha384", 00:16:04.567 "dhgroup": "ffdhe3072" 00:16:04.567 } 00:16:04.567 } 00:16:04.567 ]' 00:16:04.567 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.567 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.567 12:56:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.567 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:04.567 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.567 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.567 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.568 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.827 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:04.827 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:05.395 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.395 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:05.395 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.395 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.395 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.395 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.395 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:05.395 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:05.654 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:05.654 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.654 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.654 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:05.654 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:05.654 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.654 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.654 12:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.654 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.654 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.654 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.654 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.654 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.913 00:16:05.913 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.913 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.913 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.172 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.172 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.172 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.172 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.172 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.172 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.172 { 00:16:06.172 "cntlid": 67, 00:16:06.172 "qid": 0, 00:16:06.172 "state": "enabled", 00:16:06.172 "thread": "nvmf_tgt_poll_group_000", 00:16:06.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:06.172 "listen_address": { 00:16:06.172 "trtype": "TCP", 00:16:06.172 "adrfam": "IPv4", 00:16:06.172 "traddr": "10.0.0.2", 00:16:06.172 "trsvcid": "4420" 00:16:06.172 }, 00:16:06.172 "peer_address": { 00:16:06.172 "trtype": "TCP", 00:16:06.172 "adrfam": "IPv4", 00:16:06.172 "traddr": "10.0.0.1", 00:16:06.172 "trsvcid": "55916" 00:16:06.172 }, 00:16:06.172 "auth": { 00:16:06.172 "state": "completed", 00:16:06.172 "digest": "sha384", 00:16:06.172 "dhgroup": "ffdhe3072" 00:16:06.172 } 00:16:06.172 } 00:16:06.172 ]' 00:16:06.173 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.173 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.173 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.173 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.173 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.173 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.173 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:06.173 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.432 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:06.432 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:06.999 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.999 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:06.999 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.999 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.999 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.999 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.999 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.999 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:07.257 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:07.257 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.257 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.257 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:07.257 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:07.257 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.257 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.257 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.258 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.258 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.258 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.258 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.258 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.516 00:16:07.516 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.516 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.516 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.776 { 00:16:07.776 "cntlid": 69, 00:16:07.776 "qid": 0, 00:16:07.776 "state": "enabled", 00:16:07.776 "thread": "nvmf_tgt_poll_group_000", 00:16:07.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:07.776 "listen_address": { 
00:16:07.776 "trtype": "TCP", 00:16:07.776 "adrfam": "IPv4", 00:16:07.776 "traddr": "10.0.0.2", 00:16:07.776 "trsvcid": "4420" 00:16:07.776 }, 00:16:07.776 "peer_address": { 00:16:07.776 "trtype": "TCP", 00:16:07.776 "adrfam": "IPv4", 00:16:07.776 "traddr": "10.0.0.1", 00:16:07.776 "trsvcid": "55940" 00:16:07.776 }, 00:16:07.776 "auth": { 00:16:07.776 "state": "completed", 00:16:07.776 "digest": "sha384", 00:16:07.776 "dhgroup": "ffdhe3072" 00:16:07.776 } 00:16:07.776 } 00:16:07.776 ]' 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.776 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.035 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:08.035 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.610 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.870 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.870 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.870 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.870 00:16:08.870 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.870 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.128 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.128 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.128 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.128 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.128 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.128 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.128 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.128 { 00:16:09.128 "cntlid": 71, 00:16:09.128 "qid": 0, 00:16:09.128 "state": "enabled", 00:16:09.128 "thread": "nvmf_tgt_poll_group_000", 00:16:09.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:09.128 "listen_address": { 00:16:09.128 "trtype": "TCP", 00:16:09.128 "adrfam": "IPv4", 00:16:09.128 "traddr": "10.0.0.2", 00:16:09.128 "trsvcid": "4420" 00:16:09.128 }, 00:16:09.128 "peer_address": { 00:16:09.128 "trtype": "TCP", 00:16:09.128 "adrfam": "IPv4", 00:16:09.128 "traddr": "10.0.0.1", 00:16:09.128 "trsvcid": "55970" 00:16:09.128 }, 00:16:09.128 "auth": { 00:16:09.128 "state": "completed", 00:16:09.128 "digest": "sha384", 00:16:09.128 "dhgroup": "ffdhe3072" 00:16:09.128 } 00:16:09.128 } 00:16:09.128 ]' 00:16:09.128 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.128 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.128 12:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.388 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:09.388 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.388 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.388 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.388 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.647 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:09.647 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.215 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.473 00:16:10.732 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.732 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.732 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.732 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.732 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.732 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.732 12:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.732 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.732 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.732 { 00:16:10.732 "cntlid": 73, 00:16:10.732 "qid": 0, 00:16:10.732 "state": "enabled", 00:16:10.732 "thread": "nvmf_tgt_poll_group_000", 00:16:10.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:10.732 "listen_address": { 00:16:10.732 "trtype": "TCP", 00:16:10.732 "adrfam": "IPv4", 00:16:10.732 "traddr": "10.0.0.2", 00:16:10.732 "trsvcid": "4420" 00:16:10.732 }, 00:16:10.732 "peer_address": { 00:16:10.732 "trtype": "TCP", 00:16:10.732 "adrfam": "IPv4", 00:16:10.732 "traddr": "10.0.0.1", 00:16:10.732 "trsvcid": "56006" 00:16:10.732 }, 00:16:10.732 "auth": { 00:16:10.732 "state": "completed", 00:16:10.732 "digest": "sha384", 00:16:10.732 "dhgroup": "ffdhe4096" 00:16:10.732 } 00:16:10.732 } 00:16:10.732 ]' 00:16:10.732 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.732 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.732 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.991 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:10.991 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.991 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.991 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.991 12:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.250 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:11.250 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:11.819 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.819 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:11.819 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.819 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.819 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.819 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.819 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:11.819 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.819 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.078 00:16:12.338 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.338 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.338 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.338 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.338 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.338 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.338 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.338 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.338 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.338 { 00:16:12.338 "cntlid": 75, 00:16:12.338 "qid": 0, 00:16:12.338 "state": "enabled", 00:16:12.338 "thread": "nvmf_tgt_poll_group_000", 00:16:12.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:12.338 
"listen_address": { 00:16:12.338 "trtype": "TCP", 00:16:12.338 "adrfam": "IPv4", 00:16:12.338 "traddr": "10.0.0.2", 00:16:12.338 "trsvcid": "4420" 00:16:12.338 }, 00:16:12.338 "peer_address": { 00:16:12.338 "trtype": "TCP", 00:16:12.338 "adrfam": "IPv4", 00:16:12.338 "traddr": "10.0.0.1", 00:16:12.338 "trsvcid": "56022" 00:16:12.338 }, 00:16:12.338 "auth": { 00:16:12.338 "state": "completed", 00:16:12.338 "digest": "sha384", 00:16:12.338 "dhgroup": "ffdhe4096" 00:16:12.338 } 00:16:12.338 } 00:16:12.338 ]' 00:16:12.338 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.338 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.338 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.597 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:12.597 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.597 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.597 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.597 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.856 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:12.856 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.424 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.683 00:16:13.943 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:13.943 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.943 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.943 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.943 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.943 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.943 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.943 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.943 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.943 { 00:16:13.943 "cntlid": 77, 00:16:13.943 "qid": 0, 00:16:13.943 "state": "enabled", 00:16:13.943 "thread": "nvmf_tgt_poll_group_000", 00:16:13.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:13.943 "listen_address": { 00:16:13.943 "trtype": "TCP", 00:16:13.943 "adrfam": "IPv4", 00:16:13.943 "traddr": "10.0.0.2", 00:16:13.943 "trsvcid": "4420" 00:16:13.943 }, 00:16:13.943 "peer_address": { 00:16:13.943 "trtype": "TCP", 00:16:13.943 "adrfam": "IPv4", 00:16:13.943 "traddr": "10.0.0.1", 00:16:13.943 "trsvcid": "40082" 00:16:13.943 }, 00:16:13.943 "auth": { 00:16:13.943 "state": "completed", 00:16:13.943 "digest": "sha384", 00:16:13.943 "dhgroup": "ffdhe4096" 00:16:13.943 } 00:16:13.943 } 00:16:13.943 ]' 00:16:13.943 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.943 12:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.943 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.202 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.202 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.202 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.202 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.202 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.202 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:14.202 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:14.771 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.771 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:14.771 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.771 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:15.030 12:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.030 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.598 00:16:15.598 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.598 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.598 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.598 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.598 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.598 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.598 12:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.598 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.598 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.598 { 00:16:15.598 "cntlid": 79, 00:16:15.598 "qid": 0, 00:16:15.598 "state": "enabled", 00:16:15.598 "thread": "nvmf_tgt_poll_group_000", 00:16:15.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:15.598 "listen_address": { 00:16:15.598 "trtype": "TCP", 00:16:15.598 "adrfam": "IPv4", 00:16:15.598 "traddr": "10.0.0.2", 00:16:15.598 "trsvcid": "4420" 00:16:15.598 }, 00:16:15.598 "peer_address": { 00:16:15.598 "trtype": "TCP", 00:16:15.598 "adrfam": "IPv4", 00:16:15.598 "traddr": "10.0.0.1", 00:16:15.598 "trsvcid": "40114" 00:16:15.598 }, 00:16:15.598 "auth": { 00:16:15.598 "state": "completed", 00:16:15.598 "digest": "sha384", 00:16:15.598 "dhgroup": "ffdhe4096" 00:16:15.598 } 00:16:15.598 } 00:16:15.598 ]' 00:16:15.598 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.598 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.598 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.857 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.857 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.857 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.857 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.857 12:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.857 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:15.857 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:16.425 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.425 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:16.425 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.425 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.425 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.425 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.425 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.425 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:16.425 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.685 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.253 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.253 { 00:16:17.253 "cntlid": 81, 00:16:17.253 "qid": 0, 00:16:17.253 "state": "enabled", 00:16:17.253 "thread": "nvmf_tgt_poll_group_000", 00:16:17.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:17.253 "listen_address": { 
00:16:17.253 "trtype": "TCP", 00:16:17.253 "adrfam": "IPv4", 00:16:17.253 "traddr": "10.0.0.2", 00:16:17.253 "trsvcid": "4420" 00:16:17.253 }, 00:16:17.253 "peer_address": { 00:16:17.253 "trtype": "TCP", 00:16:17.253 "adrfam": "IPv4", 00:16:17.253 "traddr": "10.0.0.1", 00:16:17.253 "trsvcid": "40140" 00:16:17.253 }, 00:16:17.253 "auth": { 00:16:17.253 "state": "completed", 00:16:17.253 "digest": "sha384", 00:16:17.253 "dhgroup": "ffdhe6144" 00:16:17.253 } 00:16:17.253 } 00:16:17.253 ]' 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.253 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.513 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.513 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.513 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.513 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:17.513 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:18.081 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.082 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:18.082 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.082 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.082 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.082 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.082 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:18.082 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
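The per-key cycle the trace repeats (restrict digest/dhgroup, allow the host NQN with the keys under test, attach a host-side controller with DH-HMAC-CHAP) can be condensed into a dry-run sketch. The rpc.py path, socket, and NQNs are copied from the log; the `echo` prefix is an assumption of this sketch so it runs without a live SPDK target:

```shell
#!/bin/sh
# Dry-run sketch of one connect_authenticate iteration from the trace.
# In the real test RPC invokes rpc.py against /var/tmp/host.sock; here
# "echo" stands in so the commands are printed rather than executed.
RPC="echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562"
keyid=1

# 1. Pin the host to a single digest/dhgroup combination (auth.sh@121).
$RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# 2. Allow the host NQN on the subsystem with the key pair under test (auth.sh@70).
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Attach a host-side controller, forcing DH-HMAC-CHAP (auth.sh@60).
$RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
```

After the attach succeeds, the trace verifies the negotiated parameters via `nvmf_subsystem_get_qpairs`, detaches, and repeats the same handshake with `nvme connect --dhchap-secret` from the kernel initiator side.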
00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.341 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.909 00:16:18.909 12:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.909 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.909 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.909 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.909 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.909 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.909 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.909 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.909 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.909 { 00:16:18.909 "cntlid": 83, 00:16:18.909 "qid": 0, 00:16:18.909 "state": "enabled", 00:16:18.909 "thread": "nvmf_tgt_poll_group_000", 00:16:18.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:18.910 "listen_address": { 00:16:18.910 "trtype": "TCP", 00:16:18.910 "adrfam": "IPv4", 00:16:18.910 "traddr": "10.0.0.2", 00:16:18.910 "trsvcid": "4420" 00:16:18.910 }, 00:16:18.910 "peer_address": { 00:16:18.910 "trtype": "TCP", 00:16:18.910 "adrfam": "IPv4", 00:16:18.910 "traddr": "10.0.0.1", 00:16:18.910 "trsvcid": "40164" 00:16:18.910 }, 00:16:18.910 "auth": { 00:16:18.910 "state": "completed", 00:16:18.910 "digest": "sha384", 00:16:18.910 "dhgroup": "ffdhe6144" 00:16:18.910 } 00:16:18.910 } 00:16:18.910 ]' 00:16:18.910 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:18.910 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.910 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.910 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.910 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.169 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.169 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.169 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.169 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:19.169 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:19.741 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.741 12:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:19.741 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.741 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.741 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.741 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.741 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:19.741 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.000 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.259 00:16:20.518 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.518 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.518 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.518 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.518 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.518 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.518 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.518 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.518 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.518 { 00:16:20.518 "cntlid": 85, 00:16:20.518 "qid": 0, 00:16:20.518 "state": "enabled", 00:16:20.518 "thread": "nvmf_tgt_poll_group_000", 00:16:20.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:20.518 "listen_address": { 00:16:20.518 "trtype": "TCP", 00:16:20.518 "adrfam": "IPv4", 00:16:20.518 "traddr": "10.0.0.2", 00:16:20.518 "trsvcid": "4420" 00:16:20.518 }, 00:16:20.518 "peer_address": { 00:16:20.518 "trtype": "TCP", 00:16:20.518 "adrfam": "IPv4", 00:16:20.518 "traddr": "10.0.0.1", 00:16:20.518 "trsvcid": "40204" 00:16:20.518 }, 00:16:20.518 "auth": { 00:16:20.518 "state": "completed", 00:16:20.518 "digest": "sha384", 00:16:20.518 "dhgroup": "ffdhe6144" 00:16:20.518 } 00:16:20.518 } 00:16:20.518 ]' 00:16:20.518 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.518 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.518 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.777 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:20.777 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.777 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:20.777 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.777 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.036 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:21.036 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
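The `jq` checks at `target/auth.sh@75`-`77` in the trace can be reproduced offline on a qpair record like the ones the log prints (this sketch inlines a trimmed copy of one such record and assumes `jq` is installed):

```shell
#!/bin/sh
# Verify the negotiated auth parameters of a qpair, mirroring the
# jq extractions at target/auth.sh@75-77. The JSON is a trimmed
# copy of one nvmf_subsystem_get_qpairs entry from the log.
qpairs='[{"cntlid": 85, "qid": 0, "state": "enabled",
  "auth": {"state": "completed", "digest": "sha384", "dhgroup": "ffdhe6144"}}]'

digest=$(echo "$qpairs" | jq -r '.[0].auth.digest')
dhgroup=$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')
state=$(echo "$qpairs" | jq -r '.[0].auth.state')

# The test only passes when all three match what was configured.
[ "$digest" = "sha384" ] || exit 1
[ "$dhgroup" = "ffdhe6144" ] || exit 1
[ "$state" = "completed" ] || exit 1
echo "auth negotiated: $digest/$dhgroup"
```

An `auth.state` other than `completed` here would mean the DH-HMAC-CHAP exchange was still in progress or had failed, which is exactly what the `[[ completed == ... ]]` guard in the trace is screening for.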
00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.604 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.173 00:16:22.173 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.173 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.173 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.173 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.173 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.173 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.173 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.173 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.173 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.173 { 00:16:22.173 "cntlid": 87, 00:16:22.173 "qid": 0, 00:16:22.173 "state": "enabled", 00:16:22.173 "thread": "nvmf_tgt_poll_group_000", 00:16:22.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:22.173 "listen_address": { 00:16:22.173 "trtype": 
"TCP", 00:16:22.173 "adrfam": "IPv4", 00:16:22.173 "traddr": "10.0.0.2", 00:16:22.173 "trsvcid": "4420" 00:16:22.173 }, 00:16:22.173 "peer_address": { 00:16:22.173 "trtype": "TCP", 00:16:22.173 "adrfam": "IPv4", 00:16:22.173 "traddr": "10.0.0.1", 00:16:22.173 "trsvcid": "40226" 00:16:22.173 }, 00:16:22.173 "auth": { 00:16:22.173 "state": "completed", 00:16:22.173 "digest": "sha384", 00:16:22.173 "dhgroup": "ffdhe6144" 00:16:22.173 } 00:16:22.173 } 00:16:22.173 ]' 00:16:22.173 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.432 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.432 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.432 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:22.432 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.432 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.432 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.433 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.692 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:22.692 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.261 12:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.261 12:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.829 00:16:23.829 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.829 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.829 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.088 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.088 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.088 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.088 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.088 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.088 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.088 { 00:16:24.088 "cntlid": 89, 00:16:24.088 "qid": 0, 00:16:24.088 "state": "enabled", 00:16:24.088 "thread": "nvmf_tgt_poll_group_000", 00:16:24.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:24.088 "listen_address": { 00:16:24.088 "trtype": "TCP", 00:16:24.088 "adrfam": "IPv4", 00:16:24.088 "traddr": "10.0.0.2", 00:16:24.088 "trsvcid": "4420" 00:16:24.088 }, 00:16:24.088 "peer_address": { 00:16:24.088 "trtype": "TCP", 00:16:24.088 "adrfam": "IPv4", 00:16:24.088 "traddr": "10.0.0.1", 00:16:24.088 "trsvcid": "51830" 00:16:24.088 }, 00:16:24.088 "auth": { 00:16:24.088 "state": "completed", 00:16:24.088 "digest": "sha384", 00:16:24.088 "dhgroup": "ffdhe8192" 00:16:24.088 } 00:16:24.088 } 00:16:24.088 ]' 00:16:24.088 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.088 12:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.088 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.088 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.088 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.088 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.088 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.089 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.347 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:24.347 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:24.914 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
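At this point the trace has finished the ffdhe6144 pass and, via the outer loops at `target/auth.sh@119`-`121`, moves on to ffdhe8192 with the same four key ids. A dry-run sketch of that loop structure (key ids 0-3 are inferred from the trace, where key3 is notably added without a controller key; `echo` again stands in for the real rpc.py call):

```shell
#!/bin/sh
# Dry-run sketch of the outer loops at target/auth.sh@119-120: every
# dhgroup is exercised against every key id. "echo" replaces the real
# rpc.py invocation so the sketch runs without a live SPDK target.
RPC="echo rpc.py -s /var/tmp/host.sock"
out=$(
    for dhgroup in ffdhe6144 ffdhe8192; do
        # Key ids 0-3 match the keys[] array the trace iterates over.
        for keyid in 0 1 2 3; do
            $RPC bdev_nvme_set_options \
                --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        done
    done
)
echo "$out" | wc -l   # 2 dhgroups x 4 keys = 8 set_options calls
```

Each of those eight iterations then runs the full add_host / attach / verify / detach / `nvme connect` / `nvme disconnect` / remove_host cycle seen throughout the trace.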
00:16:24.915 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.915 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.915 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.915 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.915 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.915 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.915 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.172 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:25.172 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.172 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.173 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:25.173 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.173 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.173 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.173 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.173 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.173 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.173 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.173 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.173 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.740 00:16:25.740 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.740 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.740 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.740 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.740 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.740 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.740 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.740 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.740 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.740 { 00:16:25.740 "cntlid": 91, 00:16:25.740 "qid": 0, 00:16:25.740 "state": "enabled", 00:16:25.740 "thread": "nvmf_tgt_poll_group_000", 00:16:25.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:25.740 "listen_address": { 00:16:25.740 "trtype": "TCP", 00:16:25.740 "adrfam": "IPv4", 00:16:25.740 "traddr": "10.0.0.2", 00:16:25.740 "trsvcid": "4420" 00:16:25.740 }, 00:16:25.740 "peer_address": { 00:16:25.740 "trtype": "TCP", 00:16:25.740 "adrfam": "IPv4", 00:16:25.740 "traddr": "10.0.0.1", 00:16:25.740 "trsvcid": "51852" 00:16:25.740 }, 00:16:25.740 "auth": { 00:16:25.740 "state": "completed", 00:16:25.740 "digest": "sha384", 00:16:25.740 "dhgroup": "ffdhe8192" 00:16:25.740 } 00:16:25.740 } 00:16:25.740 ]' 00:16:25.740 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.999 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.999 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.999 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.999 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.999 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:25.999 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.999 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.258 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:26.258 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:26.827 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.827 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:26.827 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.827 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.827 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.827 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:26.827 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.827 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.827 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:26.827 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.827 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.827 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:27.086 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.086 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.086 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.086 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.086 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.086 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.086 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.086 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.086 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.345 00:16:27.345 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.345 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.345 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.604 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.604 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.604 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.604 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.604 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.604 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.604 { 00:16:27.604 "cntlid": 93, 00:16:27.604 "qid": 0, 00:16:27.604 "state": "enabled", 00:16:27.604 "thread": "nvmf_tgt_poll_group_000", 00:16:27.604 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:27.604 "listen_address": { 00:16:27.604 "trtype": "TCP", 00:16:27.604 "adrfam": "IPv4", 00:16:27.604 "traddr": "10.0.0.2", 00:16:27.604 "trsvcid": "4420" 00:16:27.604 }, 00:16:27.604 "peer_address": { 00:16:27.604 "trtype": "TCP", 00:16:27.604 "adrfam": "IPv4", 00:16:27.604 "traddr": "10.0.0.1", 00:16:27.604 "trsvcid": "51896" 00:16:27.604 }, 00:16:27.604 "auth": { 00:16:27.604 "state": "completed", 00:16:27.604 "digest": "sha384", 00:16:27.604 "dhgroup": "ffdhe8192" 00:16:27.604 } 00:16:27.604 } 00:16:27.604 ]' 00:16:27.604 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.604 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.604 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.863 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:27.863 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.863 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.863 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.863 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.863 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:27.863 12:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:28.432 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.432 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:28.432 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.432 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.691 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.257 00:16:29.257 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:29.257 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.257 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.514 { 00:16:29.514 "cntlid": 95, 00:16:29.514 "qid": 0, 00:16:29.514 "state": "enabled", 00:16:29.514 "thread": "nvmf_tgt_poll_group_000", 00:16:29.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:29.514 "listen_address": { 00:16:29.514 "trtype": "TCP", 00:16:29.514 "adrfam": "IPv4", 00:16:29.514 "traddr": "10.0.0.2", 00:16:29.514 "trsvcid": "4420" 00:16:29.514 }, 00:16:29.514 "peer_address": { 00:16:29.514 "trtype": "TCP", 00:16:29.514 "adrfam": "IPv4", 00:16:29.514 "traddr": "10.0.0.1", 00:16:29.514 "trsvcid": "51924" 00:16:29.514 }, 00:16:29.514 "auth": { 00:16:29.514 "state": "completed", 00:16:29.514 "digest": "sha384", 00:16:29.514 "dhgroup": "ffdhe8192" 00:16:29.514 } 00:16:29.514 } 00:16:29.514 ]' 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.514 12:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.514 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.773 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:29.773 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:30.339 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.339 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.339 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.339 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.339 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.340 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:30.340 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.340 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.340 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:30.340 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.600 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.860 00:16:30.860 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.860 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.860 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.127 12:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.127 { 00:16:31.127 "cntlid": 97, 00:16:31.127 "qid": 0, 00:16:31.127 "state": "enabled", 00:16:31.127 "thread": "nvmf_tgt_poll_group_000", 00:16:31.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:31.127 "listen_address": { 00:16:31.127 "trtype": "TCP", 00:16:31.127 "adrfam": "IPv4", 00:16:31.127 "traddr": "10.0.0.2", 00:16:31.127 "trsvcid": "4420" 00:16:31.127 }, 00:16:31.127 "peer_address": { 00:16:31.127 "trtype": "TCP", 00:16:31.127 "adrfam": "IPv4", 00:16:31.127 "traddr": "10.0.0.1", 00:16:31.127 "trsvcid": "51954" 00:16:31.127 }, 00:16:31.127 "auth": { 00:16:31.127 "state": "completed", 00:16:31.127 "digest": "sha512", 00:16:31.127 "dhgroup": "null" 00:16:31.127 } 00:16:31.127 } 00:16:31.127 ]' 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.127 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.428 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:31.428 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.044 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.303 00:16:32.303 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.303 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.303 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.562 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.562 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.562 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.562 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.562 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.562 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.562 { 00:16:32.562 "cntlid": 99, 
00:16:32.562 "qid": 0, 00:16:32.562 "state": "enabled", 00:16:32.562 "thread": "nvmf_tgt_poll_group_000", 00:16:32.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:32.562 "listen_address": { 00:16:32.562 "trtype": "TCP", 00:16:32.562 "adrfam": "IPv4", 00:16:32.562 "traddr": "10.0.0.2", 00:16:32.562 "trsvcid": "4420" 00:16:32.562 }, 00:16:32.562 "peer_address": { 00:16:32.562 "trtype": "TCP", 00:16:32.562 "adrfam": "IPv4", 00:16:32.562 "traddr": "10.0.0.1", 00:16:32.562 "trsvcid": "51972" 00:16:32.562 }, 00:16:32.562 "auth": { 00:16:32.562 "state": "completed", 00:16:32.562 "digest": "sha512", 00:16:32.562 "dhgroup": "null" 00:16:32.562 } 00:16:32.562 } 00:16:32.562 ]' 00:16:32.562 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.562 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.562 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.562 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:32.562 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.822 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.822 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.822 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.822 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret 
DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:32.822 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:33.389 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.389 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.389 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.390 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.390 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.390 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.390 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:33.390 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.649 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.908 00:16:33.908 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.908 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.908 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.167 { 00:16:34.167 "cntlid": 101, 00:16:34.167 "qid": 0, 00:16:34.167 "state": "enabled", 00:16:34.167 "thread": "nvmf_tgt_poll_group_000", 00:16:34.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:34.167 "listen_address": { 00:16:34.167 "trtype": "TCP", 00:16:34.167 "adrfam": "IPv4", 00:16:34.167 "traddr": "10.0.0.2", 00:16:34.167 "trsvcid": "4420" 00:16:34.167 }, 00:16:34.167 "peer_address": { 00:16:34.167 "trtype": "TCP", 00:16:34.167 "adrfam": "IPv4", 00:16:34.167 "traddr": "10.0.0.1", 00:16:34.167 "trsvcid": "32962" 00:16:34.167 }, 00:16:34.167 "auth": { 00:16:34.167 "state": "completed", 00:16:34.167 "digest": "sha512", 00:16:34.167 "dhgroup": "null" 00:16:34.167 } 00:16:34.167 } 
00:16:34.167 ]' 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.167 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.426 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:34.426 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:34.995 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.995 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.995 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.995 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.995 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.995 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.995 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.995 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:34.995 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.254 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.513 00:16:35.513 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.513 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.513 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.513 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.513 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:35.513 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.513 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.513 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.513 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.513 { 00:16:35.513 "cntlid": 103, 00:16:35.513 "qid": 0, 00:16:35.513 "state": "enabled", 00:16:35.513 "thread": "nvmf_tgt_poll_group_000", 00:16:35.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:35.513 "listen_address": { 00:16:35.513 "trtype": "TCP", 00:16:35.513 "adrfam": "IPv4", 00:16:35.513 "traddr": "10.0.0.2", 00:16:35.513 "trsvcid": "4420" 00:16:35.513 }, 00:16:35.513 "peer_address": { 00:16:35.513 "trtype": "TCP", 00:16:35.513 "adrfam": "IPv4", 00:16:35.513 "traddr": "10.0.0.1", 00:16:35.513 "trsvcid": "32984" 00:16:35.513 }, 00:16:35.513 "auth": { 00:16:35.513 "state": "completed", 00:16:35.513 "digest": "sha512", 00:16:35.513 "dhgroup": "null" 00:16:35.513 } 00:16:35.513 } 00:16:35.513 ]' 00:16:35.772 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.772 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.772 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.772 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:35.772 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.772 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.772 12:56:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.772 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.031 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:36.031 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:36.599 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.599 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:36.599 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.599 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.599 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.599 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.599 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.599 12:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:36.599 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.858 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.117 00:16:37.117 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.117 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.117 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.117 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.117 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.117 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.117 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.117 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.117 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.117 { 00:16:37.117 "cntlid": 105, 00:16:37.117 "qid": 0, 00:16:37.117 "state": "enabled", 00:16:37.117 "thread": "nvmf_tgt_poll_group_000", 00:16:37.117 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:37.117 "listen_address": { 00:16:37.117 "trtype": "TCP", 00:16:37.117 "adrfam": "IPv4", 00:16:37.117 "traddr": "10.0.0.2", 00:16:37.117 "trsvcid": "4420" 00:16:37.117 }, 00:16:37.117 "peer_address": { 00:16:37.117 "trtype": "TCP", 00:16:37.117 "adrfam": "IPv4", 00:16:37.117 "traddr": "10.0.0.1", 00:16:37.117 "trsvcid": "33014" 00:16:37.117 }, 00:16:37.117 "auth": { 00:16:37.117 "state": "completed", 00:16:37.117 "digest": "sha512", 00:16:37.117 "dhgroup": "ffdhe2048" 00:16:37.117 } 00:16:37.117 } 00:16:37.117 ]' 00:16:37.117 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.375 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.375 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.375 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.375 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.375 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.375 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.375 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.634 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret 
DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:37.634 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:38.199 12:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.199 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.458 00:16:38.458 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.458 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.458 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.717 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.717 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.717 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.717 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.717 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.717 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.717 { 00:16:38.717 "cntlid": 107, 00:16:38.717 "qid": 0, 00:16:38.717 "state": "enabled", 00:16:38.717 "thread": "nvmf_tgt_poll_group_000", 00:16:38.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:38.717 "listen_address": { 00:16:38.717 "trtype": "TCP", 00:16:38.717 "adrfam": "IPv4", 00:16:38.717 "traddr": "10.0.0.2", 00:16:38.717 "trsvcid": "4420" 00:16:38.717 }, 00:16:38.717 "peer_address": { 00:16:38.717 "trtype": "TCP", 00:16:38.717 "adrfam": "IPv4", 00:16:38.717 "traddr": "10.0.0.1", 00:16:38.717 "trsvcid": "33046" 00:16:38.717 }, 00:16:38.717 "auth": { 00:16:38.717 "state": 
"completed", 00:16:38.717 "digest": "sha512", 00:16:38.717 "dhgroup": "ffdhe2048" 00:16:38.717 } 00:16:38.717 } 00:16:38.717 ]' 00:16:38.717 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.717 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.717 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.717 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.717 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.976 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.976 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.976 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.976 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:38.976 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:39.542 12:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.542 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:39.542 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.542 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.542 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.542 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.542 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.542 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.801 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.060 00:16:40.060 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.060 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.060 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.320 
12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.320 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.320 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.320 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.320 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.320 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.320 { 00:16:40.320 "cntlid": 109, 00:16:40.320 "qid": 0, 00:16:40.320 "state": "enabled", 00:16:40.320 "thread": "nvmf_tgt_poll_group_000", 00:16:40.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:40.320 "listen_address": { 00:16:40.320 "trtype": "TCP", 00:16:40.320 "adrfam": "IPv4", 00:16:40.320 "traddr": "10.0.0.2", 00:16:40.320 "trsvcid": "4420" 00:16:40.320 }, 00:16:40.320 "peer_address": { 00:16:40.320 "trtype": "TCP", 00:16:40.320 "adrfam": "IPv4", 00:16:40.320 "traddr": "10.0.0.1", 00:16:40.320 "trsvcid": "33082" 00:16:40.320 }, 00:16:40.320 "auth": { 00:16:40.320 "state": "completed", 00:16:40.320 "digest": "sha512", 00:16:40.320 "dhgroup": "ffdhe2048" 00:16:40.320 } 00:16:40.320 } 00:16:40.320 ]' 00:16:40.320 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.320 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.320 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.320 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.320 12:57:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.320 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.320 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.320 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.579 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:40.579 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:41.146 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.146 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.146 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.146 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.146 
12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.147 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.147 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.147 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.405 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:41.405 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.405 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.405 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:41.405 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.405 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.405 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:41.405 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.405 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.405 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.405 12:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.405 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.405 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.664 00:16:41.664 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.664 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.664 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.922 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.922 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.922 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.922 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.922 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.922 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.922 { 00:16:41.922 "cntlid": 111, 
00:16:41.922 "qid": 0, 00:16:41.922 "state": "enabled", 00:16:41.922 "thread": "nvmf_tgt_poll_group_000", 00:16:41.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:41.922 "listen_address": { 00:16:41.923 "trtype": "TCP", 00:16:41.923 "adrfam": "IPv4", 00:16:41.923 "traddr": "10.0.0.2", 00:16:41.923 "trsvcid": "4420" 00:16:41.923 }, 00:16:41.923 "peer_address": { 00:16:41.923 "trtype": "TCP", 00:16:41.923 "adrfam": "IPv4", 00:16:41.923 "traddr": "10.0.0.1", 00:16:41.923 "trsvcid": "33106" 00:16:41.923 }, 00:16:41.923 "auth": { 00:16:41.923 "state": "completed", 00:16:41.923 "digest": "sha512", 00:16:41.923 "dhgroup": "ffdhe2048" 00:16:41.923 } 00:16:41.923 } 00:16:41.923 ]' 00:16:41.923 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.923 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.923 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.923 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:41.923 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.923 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.923 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.923 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.181 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:42.181 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:42.747 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.747 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:42.747 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.747 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.747 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.747 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.747 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.747 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:42.747 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.006 12:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:43.006 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.006 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.006 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.006 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.006 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.006 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.006 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.006 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.006 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.006 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.006 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.006 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.265 00:16:43.265 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.265 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.265 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.523 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.523 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.524 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.524 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.524 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.524 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.524 { 00:16:43.524 "cntlid": 113, 00:16:43.524 "qid": 0, 00:16:43.524 "state": "enabled", 00:16:43.524 "thread": "nvmf_tgt_poll_group_000", 00:16:43.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:43.524 "listen_address": { 00:16:43.524 "trtype": "TCP", 00:16:43.524 "adrfam": "IPv4", 00:16:43.524 "traddr": "10.0.0.2", 00:16:43.524 "trsvcid": "4420" 00:16:43.524 }, 00:16:43.524 "peer_address": { 00:16:43.524 "trtype": "TCP", 00:16:43.524 "adrfam": "IPv4", 00:16:43.524 "traddr": "10.0.0.1", 00:16:43.524 "trsvcid": "33132" 00:16:43.524 }, 00:16:43.524 "auth": { 00:16:43.524 "state": 
"completed", 00:16:43.524 "digest": "sha512", 00:16:43.524 "dhgroup": "ffdhe3072" 00:16:43.524 } 00:16:43.524 } 00:16:43.524 ]' 00:16:43.524 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.524 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.524 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.524 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.524 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.524 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.524 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.524 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.782 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:43.782 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret 
DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:44.349 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.349 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:44.349 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.349 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.349 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.349 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.349 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:44.349 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.608 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.867 00:16:44.867 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.867 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.867 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.126 { 00:16:45.126 "cntlid": 115, 00:16:45.126 "qid": 0, 00:16:45.126 "state": "enabled", 00:16:45.126 "thread": "nvmf_tgt_poll_group_000", 00:16:45.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.126 "listen_address": { 00:16:45.126 "trtype": "TCP", 00:16:45.126 "adrfam": "IPv4", 00:16:45.126 "traddr": "10.0.0.2", 00:16:45.126 "trsvcid": "4420" 00:16:45.126 }, 00:16:45.126 "peer_address": { 00:16:45.126 "trtype": "TCP", 00:16:45.126 "adrfam": "IPv4", 00:16:45.126 "traddr": "10.0.0.1", 00:16:45.126 "trsvcid": "49072" 00:16:45.126 }, 00:16:45.126 "auth": { 00:16:45.126 "state": "completed", 00:16:45.126 "digest": "sha512", 00:16:45.126 "dhgroup": "ffdhe3072" 00:16:45.126 } 00:16:45.126 } 00:16:45.126 ]' 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.126 12:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.126 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.385 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:45.385 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:45.953 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.953 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:45.953 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:45.953 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.953 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.953 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.953 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:45.953 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.212 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.471 00:16:46.471 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.471 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.471 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.729 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.729 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.729 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.729 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.729 12:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.729 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.729 { 00:16:46.729 "cntlid": 117, 00:16:46.729 "qid": 0, 00:16:46.729 "state": "enabled", 00:16:46.729 "thread": "nvmf_tgt_poll_group_000", 00:16:46.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:46.729 "listen_address": { 00:16:46.729 "trtype": "TCP", 00:16:46.729 "adrfam": "IPv4", 00:16:46.729 "traddr": "10.0.0.2", 00:16:46.729 "trsvcid": "4420" 00:16:46.729 }, 00:16:46.729 "peer_address": { 00:16:46.729 "trtype": "TCP", 00:16:46.729 "adrfam": "IPv4", 00:16:46.729 "traddr": "10.0.0.1", 00:16:46.729 "trsvcid": "49108" 00:16:46.729 }, 00:16:46.729 "auth": { 00:16:46.729 "state": "completed", 00:16:46.729 "digest": "sha512", 00:16:46.729 "dhgroup": "ffdhe3072" 00:16:46.729 } 00:16:46.729 } 00:16:46.729 ]' 00:16:46.729 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.730 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.730 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.730 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.730 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.730 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.730 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.730 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.987 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:46.987 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:47.551 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.551 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:47.551 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.551 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.551 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.551 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.551 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:47.551 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.808 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.066 00:16:48.066 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.066 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.066 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.324 { 00:16:48.324 "cntlid": 119, 00:16:48.324 "qid": 0, 00:16:48.324 "state": "enabled", 00:16:48.324 "thread": "nvmf_tgt_poll_group_000", 00:16:48.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:48.324 "listen_address": { 00:16:48.324 "trtype": "TCP", 00:16:48.324 "adrfam": "IPv4", 00:16:48.324 "traddr": "10.0.0.2", 00:16:48.324 "trsvcid": "4420" 00:16:48.324 }, 00:16:48.324 "peer_address": { 00:16:48.324 "trtype": "TCP", 00:16:48.324 "adrfam": "IPv4", 00:16:48.324 "traddr": "10.0.0.1", 
00:16:48.324 "trsvcid": "49142" 00:16:48.324 }, 00:16:48.324 "auth": { 00:16:48.324 "state": "completed", 00:16:48.324 "digest": "sha512", 00:16:48.324 "dhgroup": "ffdhe3072" 00:16:48.324 } 00:16:48.324 } 00:16:48.324 ]' 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.324 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.583 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:48.583 12:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:49.151 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.151 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.151 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.151 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.151 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.151 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.151 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.151 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:49.151 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:49.411 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:49.411 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.411 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.411 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:49.411 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.411 12:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.411 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.411 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.411 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.411 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.411 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.411 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.411 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.669 00:16:49.669 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.669 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.669 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.927 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.927 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.927 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.927 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.928 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.928 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.928 { 00:16:49.928 "cntlid": 121, 00:16:49.928 "qid": 0, 00:16:49.928 "state": "enabled", 00:16:49.928 "thread": "nvmf_tgt_poll_group_000", 00:16:49.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:49.928 "listen_address": { 00:16:49.928 "trtype": "TCP", 00:16:49.928 "adrfam": "IPv4", 00:16:49.928 "traddr": "10.0.0.2", 00:16:49.928 "trsvcid": "4420" 00:16:49.928 }, 00:16:49.928 "peer_address": { 00:16:49.928 "trtype": "TCP", 00:16:49.928 "adrfam": "IPv4", 00:16:49.928 "traddr": "10.0.0.1", 00:16:49.928 "trsvcid": "49174" 00:16:49.928 }, 00:16:49.928 "auth": { 00:16:49.928 "state": "completed", 00:16:49.928 "digest": "sha512", 00:16:49.928 "dhgroup": "ffdhe4096" 00:16:49.928 } 00:16:49.928 } 00:16:49.928 ]' 00:16:49.928 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.928 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.928 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.928 12:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.928 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.928 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.928 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.928 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.186 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:50.186 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:50.754 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.754 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:50.754 12:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.754 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.754 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.754 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.754 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:50.754 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:51.013 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:51.013 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.013 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.013 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:51.013 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.013 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.013 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.013 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.013 12:57:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.013 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.013 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.013 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.013 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.271 00:16:51.271 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.271 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.271 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.531 { 00:16:51.531 "cntlid": 123, 00:16:51.531 "qid": 0, 00:16:51.531 "state": "enabled", 00:16:51.531 "thread": "nvmf_tgt_poll_group_000", 00:16:51.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:51.531 "listen_address": { 00:16:51.531 "trtype": "TCP", 00:16:51.531 "adrfam": "IPv4", 00:16:51.531 "traddr": "10.0.0.2", 00:16:51.531 "trsvcid": "4420" 00:16:51.531 }, 00:16:51.531 "peer_address": { 00:16:51.531 "trtype": "TCP", 00:16:51.531 "adrfam": "IPv4", 00:16:51.531 "traddr": "10.0.0.1", 00:16:51.531 "trsvcid": "49202" 00:16:51.531 }, 00:16:51.531 "auth": { 00:16:51.531 "state": "completed", 00:16:51.531 "digest": "sha512", 00:16:51.531 "dhgroup": "ffdhe4096" 00:16:51.531 } 00:16:51.531 } 00:16:51.531 ]' 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.531 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.809 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:51.809 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:52.376 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.376 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:52.376 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.376 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.376 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.376 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.376 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:52.376 12:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.635 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.894 00:16:52.894 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.894 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.894 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.894 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.154 { 00:16:53.154 "cntlid": 125, 00:16:53.154 "qid": 0, 00:16:53.154 "state": "enabled", 00:16:53.154 "thread": "nvmf_tgt_poll_group_000", 00:16:53.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:53.154 "listen_address": { 00:16:53.154 "trtype": "TCP", 00:16:53.154 "adrfam": "IPv4", 00:16:53.154 "traddr": "10.0.0.2", 00:16:53.154 
"trsvcid": "4420" 00:16:53.154 }, 00:16:53.154 "peer_address": { 00:16:53.154 "trtype": "TCP", 00:16:53.154 "adrfam": "IPv4", 00:16:53.154 "traddr": "10.0.0.1", 00:16:53.154 "trsvcid": "49222" 00:16:53.154 }, 00:16:53.154 "auth": { 00:16:53.154 "state": "completed", 00:16:53.154 "digest": "sha512", 00:16:53.154 "dhgroup": "ffdhe4096" 00:16:53.154 } 00:16:53.154 } 00:16:53.154 ]' 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.154 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.412 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:53.412 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:16:53.978 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.978 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:53.978 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.978 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.978 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.978 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.978 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:53.978 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.237 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.496 00:16:54.496 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.496 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:54.496 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.754 { 00:16:54.754 "cntlid": 127, 00:16:54.754 "qid": 0, 00:16:54.754 "state": "enabled", 00:16:54.754 "thread": "nvmf_tgt_poll_group_000", 00:16:54.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:54.754 "listen_address": { 00:16:54.754 "trtype": "TCP", 00:16:54.754 "adrfam": "IPv4", 00:16:54.754 "traddr": "10.0.0.2", 00:16:54.754 "trsvcid": "4420" 00:16:54.754 }, 00:16:54.754 "peer_address": { 00:16:54.754 "trtype": "TCP", 00:16:54.754 "adrfam": "IPv4", 00:16:54.754 "traddr": "10.0.0.1", 00:16:54.754 "trsvcid": "40506" 00:16:54.754 }, 00:16:54.754 "auth": { 00:16:54.754 "state": "completed", 00:16:54.754 "digest": "sha512", 00:16:54.754 "dhgroup": "ffdhe4096" 00:16:54.754 } 00:16:54.754 } 00:16:54.754 ]' 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.754 
12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.754 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.012 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:55.012 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:16:55.578 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.578 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:55.578 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.578 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:55.578 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.578 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.578 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.578 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:55.578 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.837 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.095 00:16:56.095 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.095 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.095 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.353 12:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.353 { 00:16:56.353 "cntlid": 129, 00:16:56.353 "qid": 0, 00:16:56.353 "state": "enabled", 00:16:56.353 "thread": "nvmf_tgt_poll_group_000", 00:16:56.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:56.353 "listen_address": { 00:16:56.353 "trtype": "TCP", 00:16:56.353 "adrfam": "IPv4", 00:16:56.353 "traddr": "10.0.0.2", 00:16:56.353 "trsvcid": "4420" 00:16:56.353 }, 00:16:56.353 "peer_address": { 00:16:56.353 "trtype": "TCP", 00:16:56.353 "adrfam": "IPv4", 00:16:56.353 "traddr": "10.0.0.1", 00:16:56.353 "trsvcid": "40544" 00:16:56.353 }, 00:16:56.353 "auth": { 00:16:56.353 "state": "completed", 00:16:56.353 "digest": "sha512", 00:16:56.353 "dhgroup": "ffdhe6144" 00:16:56.353 } 00:16:56.353 } 00:16:56.353 ]' 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.353 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.612 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:56.612 12:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:16:57.181 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.181 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:57.181 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.181 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.181 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.181 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.181 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:57.181 12:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:57.440 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:57.440 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.440 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.440 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:57.440 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.441 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.441 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.441 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.441 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.441 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.441 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.441 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.441 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.700 00:16:57.700 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.700 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.700 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.959 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.959 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.959 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.959 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.959 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.959 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.959 { 00:16:57.959 "cntlid": 131, 00:16:57.959 "qid": 0, 00:16:57.959 "state": "enabled", 00:16:57.959 "thread": "nvmf_tgt_poll_group_000", 00:16:57.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:57.959 "listen_address": { 00:16:57.959 "trtype": "TCP", 00:16:57.959 "adrfam": "IPv4", 00:16:57.959 "traddr": "10.0.0.2", 00:16:57.959 
"trsvcid": "4420" 00:16:57.959 }, 00:16:57.959 "peer_address": { 00:16:57.959 "trtype": "TCP", 00:16:57.959 "adrfam": "IPv4", 00:16:57.959 "traddr": "10.0.0.1", 00:16:57.959 "trsvcid": "40560" 00:16:57.959 }, 00:16:57.959 "auth": { 00:16:57.959 "state": "completed", 00:16:57.959 "digest": "sha512", 00:16:57.959 "dhgroup": "ffdhe6144" 00:16:57.959 } 00:16:57.959 } 00:16:57.959 ]' 00:16:57.959 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.959 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.959 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.959 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.959 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.218 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.218 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.218 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.218 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:58.218 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:16:58.785 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.786 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.786 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.786 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.786 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.786 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.786 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:58.786 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.045 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.305 00:16:59.564 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.564 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:59.564 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.564 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.564 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.564 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.564 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.564 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.564 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.564 { 00:16:59.564 "cntlid": 133, 00:16:59.564 "qid": 0, 00:16:59.564 "state": "enabled", 00:16:59.564 "thread": "nvmf_tgt_poll_group_000", 00:16:59.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:59.564 "listen_address": { 00:16:59.564 "trtype": "TCP", 00:16:59.564 "adrfam": "IPv4", 00:16:59.564 "traddr": "10.0.0.2", 00:16:59.564 "trsvcid": "4420" 00:16:59.564 }, 00:16:59.564 "peer_address": { 00:16:59.564 "trtype": "TCP", 00:16:59.564 "adrfam": "IPv4", 00:16:59.564 "traddr": "10.0.0.1", 00:16:59.564 "trsvcid": "40596" 00:16:59.564 }, 00:16:59.564 "auth": { 00:16:59.564 "state": "completed", 00:16:59.564 "digest": "sha512", 00:16:59.564 "dhgroup": "ffdhe6144" 00:16:59.564 } 00:16:59.564 } 00:16:59.564 ]' 00:16:59.564 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.564 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.564 12:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.822 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.822 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.822 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.822 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.823 12:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.082 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:17:00.082 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.650 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.217 00:17:01.217 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.218 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.218 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.218 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.218 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.218 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.218 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:01.218 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.218 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.218 { 00:17:01.218 "cntlid": 135, 00:17:01.218 "qid": 0, 00:17:01.218 "state": "enabled", 00:17:01.218 "thread": "nvmf_tgt_poll_group_000", 00:17:01.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:01.218 "listen_address": { 00:17:01.218 "trtype": "TCP", 00:17:01.218 "adrfam": "IPv4", 00:17:01.218 "traddr": "10.0.0.2", 00:17:01.218 "trsvcid": "4420" 00:17:01.218 }, 00:17:01.218 "peer_address": { 00:17:01.218 "trtype": "TCP", 00:17:01.218 "adrfam": "IPv4", 00:17:01.218 "traddr": "10.0.0.1", 00:17:01.218 "trsvcid": "40618" 00:17:01.218 }, 00:17:01.218 "auth": { 00:17:01.218 "state": "completed", 00:17:01.218 "digest": "sha512", 00:17:01.218 "dhgroup": "ffdhe6144" 00:17:01.218 } 00:17:01.218 } 00:17:01.218 ]' 00:17:01.218 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.218 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.218 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.476 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.476 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.476 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.476 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.476 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.476 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:17:01.476 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:17:02.044 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.044 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:02.044 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.044 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:02.302 12:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.302 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.869 00:17:02.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.128 { 00:17:03.128 "cntlid": 137, 00:17:03.128 "qid": 0, 00:17:03.128 "state": "enabled", 00:17:03.128 "thread": "nvmf_tgt_poll_group_000", 00:17:03.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:03.128 "listen_address": { 00:17:03.128 "trtype": "TCP", 00:17:03.128 "adrfam": "IPv4", 00:17:03.128 "traddr": "10.0.0.2", 00:17:03.128 
"trsvcid": "4420" 00:17:03.128 }, 00:17:03.128 "peer_address": { 00:17:03.128 "trtype": "TCP", 00:17:03.128 "adrfam": "IPv4", 00:17:03.128 "traddr": "10.0.0.1", 00:17:03.128 "trsvcid": "40654" 00:17:03.128 }, 00:17:03.128 "auth": { 00:17:03.128 "state": "completed", 00:17:03.128 "digest": "sha512", 00:17:03.128 "dhgroup": "ffdhe8192" 00:17:03.128 } 00:17:03.128 } 00:17:03.128 ]' 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.128 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.387 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:17:03.387 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:17:03.954 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.954 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.954 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.954 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.954 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.954 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.955 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:03.955 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:04.213 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:04.213 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.213 12:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.213 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:04.213 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.213 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.213 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.213 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.213 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.213 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.213 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.213 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.213 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.780 00:17:04.780 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.780 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.780 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.780 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.780 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.780 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.780 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.780 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.780 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.780 { 00:17:04.780 "cntlid": 139, 00:17:04.780 "qid": 0, 00:17:04.780 "state": "enabled", 00:17:04.780 "thread": "nvmf_tgt_poll_group_000", 00:17:04.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:04.780 "listen_address": { 00:17:04.780 "trtype": "TCP", 00:17:04.780 "adrfam": "IPv4", 00:17:04.780 "traddr": "10.0.0.2", 00:17:04.780 "trsvcid": "4420" 00:17:04.780 }, 00:17:04.780 "peer_address": { 00:17:04.780 "trtype": "TCP", 00:17:04.780 "adrfam": "IPv4", 00:17:04.780 "traddr": "10.0.0.1", 00:17:04.780 "trsvcid": "51664" 00:17:04.780 }, 00:17:04.780 "auth": { 00:17:04.780 "state": "completed", 00:17:04.780 "digest": "sha512", 00:17:04.780 "dhgroup": "ffdhe8192" 00:17:04.780 } 00:17:04.780 } 00:17:04.780 ]' 00:17:04.780 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.780 12:57:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.780 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.039 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.039 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.039 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.039 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.039 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.297 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:17:05.297 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: --dhchap-ctrl-secret DHHC-1:02:Y2Q5YTg5MWZhYjc0YzRjNjY4OTYwNDFhZDA3MTVhNmEwMTE4MDhmYTc2ZGQ3ZjRmx4Ockg==: 00:17:05.862 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.862 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.862 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.862 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.862 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.862 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.863 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:05.863 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.863 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.430 00:17:06.430 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.430 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.430 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.688 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.688 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.688 12:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.688 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.688 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.688 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.688 { 00:17:06.688 "cntlid": 141, 00:17:06.688 "qid": 0, 00:17:06.688 "state": "enabled", 00:17:06.688 "thread": "nvmf_tgt_poll_group_000", 00:17:06.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:06.688 "listen_address": { 00:17:06.688 "trtype": "TCP", 00:17:06.688 "adrfam": "IPv4", 00:17:06.688 "traddr": "10.0.0.2", 00:17:06.688 "trsvcid": "4420" 00:17:06.688 }, 00:17:06.688 "peer_address": { 00:17:06.688 "trtype": "TCP", 00:17:06.688 "adrfam": "IPv4", 00:17:06.688 "traddr": "10.0.0.1", 00:17:06.688 "trsvcid": "51690" 00:17:06.688 }, 00:17:06.688 "auth": { 00:17:06.688 "state": "completed", 00:17:06.688 "digest": "sha512", 00:17:06.688 "dhgroup": "ffdhe8192" 00:17:06.688 } 00:17:06.688 } 00:17:06.688 ]' 00:17:06.688 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.689 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.689 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.689 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.689 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.689 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.689 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.689 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.947 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:17:06.947 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:01:NTM3NzMzNTA2MDIxNTgzYmQ0YjA4ZmY5Mjk0Y2U5NzLZIzHg: 00:17:07.514 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.514 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:07.514 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.514 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.514 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.514 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.515 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.515 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.779 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.377 00:17:08.377 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.377 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.377 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.377 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.377 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.377 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.377 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.377 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.377 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.377 { 00:17:08.377 "cntlid": 143, 00:17:08.377 "qid": 0, 00:17:08.377 "state": "enabled", 00:17:08.377 "thread": "nvmf_tgt_poll_group_000", 00:17:08.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:08.377 "listen_address": { 00:17:08.377 "trtype": "TCP", 00:17:08.377 "adrfam": 
"IPv4", 00:17:08.377 "traddr": "10.0.0.2", 00:17:08.377 "trsvcid": "4420" 00:17:08.377 }, 00:17:08.377 "peer_address": { 00:17:08.377 "trtype": "TCP", 00:17:08.377 "adrfam": "IPv4", 00:17:08.377 "traddr": "10.0.0.1", 00:17:08.377 "trsvcid": "51714" 00:17:08.377 }, 00:17:08.377 "auth": { 00:17:08.377 "state": "completed", 00:17:08.377 "digest": "sha512", 00:17:08.377 "dhgroup": "ffdhe8192" 00:17:08.377 } 00:17:08.377 } 00:17:08.377 ]' 00:17:08.377 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.377 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.377 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.673 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:08.674 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.674 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.674 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.674 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.674 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:17:08.674 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:17:09.241 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.241 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:09.241 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.241 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.241 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.241 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:09.241 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:09.241 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:09.241 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.241 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.241 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.500 12:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:09.500 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.500 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.500 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:09.500 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.500 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.500 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.500 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.500 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.500 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.500 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.500 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.500 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.067 00:17:10.067 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.067 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.067 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.327 { 00:17:10.327 "cntlid": 145, 00:17:10.327 "qid": 0, 00:17:10.327 "state": "enabled", 00:17:10.327 "thread": "nvmf_tgt_poll_group_000", 00:17:10.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:10.327 "listen_address": { 00:17:10.327 "trtype": "TCP", 00:17:10.327 "adrfam": "IPv4", 00:17:10.327 "traddr": "10.0.0.2", 00:17:10.327 "trsvcid": "4420" 00:17:10.327 }, 00:17:10.327 "peer_address": { 00:17:10.327 "trtype": "TCP", 00:17:10.327 "adrfam": "IPv4", 00:17:10.327 "traddr": "10.0.0.1", 00:17:10.327 "trsvcid": "51738" 00:17:10.327 }, 00:17:10.327 "auth": { 00:17:10.327 "state": 
"completed", 00:17:10.327 "digest": "sha512", 00:17:10.327 "dhgroup": "ffdhe8192" 00:17:10.327 } 00:17:10.327 } 00:17:10.327 ]' 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.327 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.586 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:17:10.587 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmI1ODRiODdhNDMyNmIzNTIyNmNiYjI5ZTQ1NzkxZmUzNjJlNzJjM2JmMjc2NjQ0aspUEw==: --dhchap-ctrl-secret 
DHHC-1:03:M2QwYmMxOTBhMzZlNjc5ZTk1MzdlMzk4YjI2ODhmMDZiYjcwNzcxNDhlYTJiMjAwZGY1ZjdkYjE4ZWU2MWI2OSSZ+U4=: 00:17:11.154 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.154 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.154 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.154 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:11.155 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:11.723 request: 00:17:11.723 { 00:17:11.723 "name": "nvme0", 00:17:11.723 "trtype": "tcp", 00:17:11.723 "traddr": "10.0.0.2", 00:17:11.723 "adrfam": "ipv4", 00:17:11.723 "trsvcid": "4420", 00:17:11.723 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:11.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:11.723 "prchk_reftag": false, 00:17:11.723 "prchk_guard": false, 00:17:11.723 "hdgst": false, 00:17:11.723 "ddgst": false, 00:17:11.723 "dhchap_key": "key2", 00:17:11.723 "allow_unrecognized_csi": false, 00:17:11.723 "method": "bdev_nvme_attach_controller", 00:17:11.723 "req_id": 1 00:17:11.723 } 00:17:11.723 Got JSON-RPC error response 00:17:11.723 response: 00:17:11.723 { 00:17:11.723 "code": -5, 00:17:11.723 "message": 
"Input/output error" 00:17:11.723 } 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:11.723 12:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:11.723 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:11.982 request: 00:17:11.982 { 00:17:11.982 "name": "nvme0", 00:17:11.982 "trtype": "tcp", 00:17:11.982 "traddr": "10.0.0.2", 00:17:11.982 "adrfam": "ipv4", 00:17:11.982 "trsvcid": "4420", 00:17:11.982 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:11.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:11.982 "prchk_reftag": false, 00:17:11.982 "prchk_guard": false, 00:17:11.982 "hdgst": 
false, 00:17:11.982 "ddgst": false, 00:17:11.982 "dhchap_key": "key1", 00:17:11.982 "dhchap_ctrlr_key": "ckey2", 00:17:11.982 "allow_unrecognized_csi": false, 00:17:11.982 "method": "bdev_nvme_attach_controller", 00:17:11.982 "req_id": 1 00:17:11.982 } 00:17:11.982 Got JSON-RPC error response 00:17:11.982 response: 00:17:11.982 { 00:17:11.982 "code": -5, 00:17:11.982 "message": "Input/output error" 00:17:11.982 } 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.982 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.551 request: 00:17:12.551 { 00:17:12.551 "name": "nvme0", 00:17:12.551 "trtype": 
"tcp", 00:17:12.551 "traddr": "10.0.0.2", 00:17:12.551 "adrfam": "ipv4", 00:17:12.551 "trsvcid": "4420", 00:17:12.551 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:12.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:12.551 "prchk_reftag": false, 00:17:12.551 "prchk_guard": false, 00:17:12.551 "hdgst": false, 00:17:12.551 "ddgst": false, 00:17:12.551 "dhchap_key": "key1", 00:17:12.551 "dhchap_ctrlr_key": "ckey1", 00:17:12.551 "allow_unrecognized_csi": false, 00:17:12.551 "method": "bdev_nvme_attach_controller", 00:17:12.551 "req_id": 1 00:17:12.551 } 00:17:12.551 Got JSON-RPC error response 00:17:12.551 response: 00:17:12.551 { 00:17:12.551 "code": -5, 00:17:12.551 "message": "Input/output error" 00:17:12.551 } 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1197300 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@950 -- # '[' -z 1197300 ']' 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1197300 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1197300 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1197300' 00:17:12.551 killing process with pid 1197300 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1197300 00:17:12.551 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1197300 00:17:12.810 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:12.810 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:12.811 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:12.811 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.811 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1219301 00:17:12.811 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:12.811 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1219301 00:17:12.811 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1219301 ']' 00:17:12.811 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.811 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.811 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.811 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.811 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1219301 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1219301 ']' 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.070 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.330 null0 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.C5Y 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.flG ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.flG 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HzU 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.8yL ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8yL 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ANo 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.San ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.San 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qWX 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.330 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.267 nvme0n1 00:17:14.267 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.267 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.267 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.267 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.267 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.267 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.267 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.267 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.267 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.267 { 00:17:14.267 "cntlid": 1, 00:17:14.267 "qid": 0, 00:17:14.267 "state": "enabled", 00:17:14.267 "thread": "nvmf_tgt_poll_group_000", 00:17:14.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:14.267 "listen_address": { 00:17:14.267 "trtype": "TCP", 00:17:14.267 "adrfam": "IPv4", 00:17:14.267 "traddr": "10.0.0.2", 00:17:14.267 "trsvcid": "4420" 00:17:14.267 }, 00:17:14.267 "peer_address": { 00:17:14.267 "trtype": "TCP", 00:17:14.267 "adrfam": "IPv4", 00:17:14.267 "traddr": 
"10.0.0.1", 00:17:14.267 "trsvcid": "46626" 00:17:14.267 }, 00:17:14.267 "auth": { 00:17:14.267 "state": "completed", 00:17:14.267 "digest": "sha512", 00:17:14.267 "dhgroup": "ffdhe8192" 00:17:14.267 } 00:17:14.267 } 00:17:14.267 ]' 00:17:14.267 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.267 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.267 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.525 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.525 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.525 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.525 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.525 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.784 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:17:14.784 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:17:15.351 12:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:15.351 12:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.351 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.610 request: 00:17:15.610 { 00:17:15.610 "name": "nvme0", 00:17:15.610 "trtype": "tcp", 00:17:15.610 "traddr": "10.0.0.2", 00:17:15.610 "adrfam": "ipv4", 00:17:15.610 "trsvcid": "4420", 00:17:15.610 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:15.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:15.610 "prchk_reftag": false, 00:17:15.610 "prchk_guard": false, 00:17:15.610 "hdgst": false, 00:17:15.610 "ddgst": false, 00:17:15.610 "dhchap_key": "key3", 00:17:15.610 
"allow_unrecognized_csi": false, 00:17:15.610 "method": "bdev_nvme_attach_controller", 00:17:15.610 "req_id": 1 00:17:15.610 } 00:17:15.610 Got JSON-RPC error response 00:17:15.610 response: 00:17:15.610 { 00:17:15.610 "code": -5, 00:17:15.610 "message": "Input/output error" 00:17:15.610 } 00:17:15.610 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:15.610 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:15.610 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:15.610 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:15.610 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:15.610 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:15.610 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:15.610 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:15.869 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:15.869 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:15.869 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:15.869 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:15.869 12:57:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.869 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:15.869 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.869 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.869 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.869 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.128 request: 00:17:16.128 { 00:17:16.128 "name": "nvme0", 00:17:16.128 "trtype": "tcp", 00:17:16.128 "traddr": "10.0.0.2", 00:17:16.128 "adrfam": "ipv4", 00:17:16.128 "trsvcid": "4420", 00:17:16.128 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:16.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:16.128 "prchk_reftag": false, 00:17:16.128 "prchk_guard": false, 00:17:16.128 "hdgst": false, 00:17:16.128 "ddgst": false, 00:17:16.128 "dhchap_key": "key3", 00:17:16.128 "allow_unrecognized_csi": false, 00:17:16.128 "method": "bdev_nvme_attach_controller", 00:17:16.128 "req_id": 1 00:17:16.128 } 00:17:16.128 Got JSON-RPC error response 00:17:16.128 response: 00:17:16.128 { 00:17:16.128 "code": -5, 00:17:16.128 "message": "Input/output error" 00:17:16.128 } 00:17:16.128 
12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:16.128 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.128 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.128 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.128 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:16.128 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:16.128 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:16.128 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:16.128 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:16.128 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.387 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.646 request: 00:17:16.646 { 00:17:16.646 "name": "nvme0", 00:17:16.646 "trtype": "tcp", 00:17:16.646 "traddr": "10.0.0.2", 00:17:16.646 "adrfam": "ipv4", 00:17:16.646 "trsvcid": "4420", 00:17:16.646 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:16.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:16.646 "prchk_reftag": false, 00:17:16.646 "prchk_guard": false, 00:17:16.646 "hdgst": false, 00:17:16.646 "ddgst": false, 00:17:16.646 "dhchap_key": "key0", 00:17:16.646 "dhchap_ctrlr_key": "key1", 00:17:16.646 "allow_unrecognized_csi": false, 00:17:16.646 "method": "bdev_nvme_attach_controller", 00:17:16.646 "req_id": 1 00:17:16.646 } 00:17:16.646 Got JSON-RPC error response 00:17:16.646 response: 00:17:16.646 { 00:17:16.646 "code": -5, 00:17:16.646 "message": "Input/output error" 00:17:16.646 } 00:17:16.646 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:16.646 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.646 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.646 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.646 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:16.646 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:16.646 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:16.905 nvme0n1 00:17:16.905 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:16.905 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:16.905 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.164 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.164 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.164 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.423 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:17.423 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.423 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:17.423 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.423 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:17.423 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:17.423 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:17.989 nvme0n1 00:17:17.989 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:17.989 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:17.989 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.248 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.248 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:18.248 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.248 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.248 
12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.248 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:18.248 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:18.248 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.507 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.507 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:17:18.507 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: --dhchap-ctrl-secret DHHC-1:03:YTA3MmQ2YTA4NWFjZGIwZTg1Mzg5ZWJlMTM3MGY4ZTk3ZTIxOTVhMmU5MDhlMzAwZDM1ZDBmMWQxNDU5NzQzNZLJ8K0=: 00:17:19.074 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:19.074 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:19.074 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:19.074 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:19.074 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:19.074 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:19.074 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:19.074 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.074 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.332 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:19.332 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:19.332 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:19.332 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:19.332 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.332 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:19.332 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.332 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:19.332 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:19.332 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:19.591 request: 00:17:19.591 { 00:17:19.591 "name": "nvme0", 00:17:19.591 "trtype": "tcp", 00:17:19.591 "traddr": "10.0.0.2", 00:17:19.591 "adrfam": "ipv4", 00:17:19.591 "trsvcid": "4420", 00:17:19.591 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:19.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:19.591 "prchk_reftag": false, 00:17:19.591 "prchk_guard": false, 00:17:19.591 "hdgst": false, 00:17:19.591 "ddgst": false, 00:17:19.591 "dhchap_key": "key1", 00:17:19.591 "allow_unrecognized_csi": false, 00:17:19.591 "method": "bdev_nvme_attach_controller", 00:17:19.591 "req_id": 1 00:17:19.591 } 00:17:19.591 Got JSON-RPC error response 00:17:19.591 response: 00:17:19.591 { 00:17:19.591 "code": -5, 00:17:19.591 "message": "Input/output error" 00:17:19.591 } 00:17:19.591 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:19.591 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:19.591 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:19.591 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:19.591 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.591 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.591 12:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:20.528 nvme0n1 00:17:20.528 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:20.528 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:20.528 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.528 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.528 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.528 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.787 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.787 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.787 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.787 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.787 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:20.787 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:20.787 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:21.045 nvme0n1 00:17:21.045 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:21.045 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:21.045 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.304 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.304 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.304 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: '' 2s 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: ]] 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MGY2OWRlODQ5MTFiY2FiMDFjODA2YWUyNTFiZmU2ZTW5Ijqa: 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:21.563 12:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:23.467 
12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: 2s 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:23.467 12:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: ]] 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTNmOTZjYTUxY2FkOGY1YjY1YzYxMGQ0MTQ5MjA3Mjg3ODMzMzEzOGQxNTEwNGRmBVyeKQ==: 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:23.467 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:26.002 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:26.260 nvme0n1 00:17:26.260 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:26.260 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.260 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.260 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.260 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:26.260 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:26.828 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:26.828 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:26.828 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.087 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.087 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.087 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.087 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.087 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.087 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:27.087 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:27.346 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:27.913 request: 00:17:27.913 { 00:17:27.913 "name": "nvme0", 00:17:27.913 "dhchap_key": "key1", 00:17:27.913 "dhchap_ctrlr_key": "key3", 00:17:27.913 "method": "bdev_nvme_set_keys", 00:17:27.913 "req_id": 1 00:17:27.913 } 00:17:27.913 Got JSON-RPC error response 00:17:27.913 response: 00:17:27.913 { 00:17:27.913 "code": -13, 00:17:27.913 "message": "Permission denied" 00:17:27.913 } 00:17:27.913 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:27.913 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:27.913 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:27.913 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:27.913 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:27.913 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:27.913 12:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.171 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:28.171 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:29.106 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:29.106 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:29.106 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.365 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:29.365 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:29.365 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.365 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.365 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.365 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:29.365 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:29.365 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:29.933 nvme0n1 00:17:30.192 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:30.192 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.192 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.192 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.192 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:30.192 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:30.192 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:30.192 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:30.192 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.192 12:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:30.192 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.192 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:30.192 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:30.450 request: 00:17:30.450 { 00:17:30.450 "name": "nvme0", 00:17:30.450 "dhchap_key": "key2", 00:17:30.450 "dhchap_ctrlr_key": "key0", 00:17:30.450 "method": "bdev_nvme_set_keys", 00:17:30.450 "req_id": 1 00:17:30.450 } 00:17:30.450 Got JSON-RPC error response 00:17:30.450 response: 00:17:30.450 { 00:17:30.450 "code": -13, 00:17:30.450 "message": "Permission denied" 00:17:30.450 } 00:17:30.450 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:30.450 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:30.450 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:30.708 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:30.708 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:30.708 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:30.708 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.708 12:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:30.708 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:32.086 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:32.086 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:32.086 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1197457 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1197457 ']' 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1197457 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1197457 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 1197457' 00:17:32.086 killing process with pid 1197457 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1197457 00:17:32.086 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1197457 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:32.345 rmmod nvme_tcp 00:17:32.345 rmmod nvme_fabrics 00:17:32.345 rmmod nvme_keyring 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1219301 ']' 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1219301 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1219301 ']' 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1219301 
00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1219301 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1219301' 00:17:32.345 killing process with pid 1219301 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1219301 00:17:32.345 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1219301 00:17:32.604 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:32.605 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:32.605 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:32.605 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:32.605 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:17:32.605 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:32.605 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:17:32.605 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:32.605 12:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:32.605 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.605 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.605 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.141 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:35.141 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.C5Y /tmp/spdk.key-sha256.HzU /tmp/spdk.key-sha384.ANo /tmp/spdk.key-sha512.qWX /tmp/spdk.key-sha512.flG /tmp/spdk.key-sha384.8yL /tmp/spdk.key-sha256.San '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:35.141 00:17:35.141 real 2m31.309s 00:17:35.141 user 5m48.570s 00:17:35.141 sys 0m24.240s 00:17:35.141 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:35.141 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.141 ************************************ 00:17:35.141 END TEST nvmf_auth_target 00:17:35.141 ************************************ 00:17:35.141 12:57:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:35.141 12:57:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:35.141 12:57:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:35.141 12:57:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- 
# xtrace_disable 00:17:35.141 12:57:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:35.141 ************************************ 00:17:35.141 START TEST nvmf_bdevio_no_huge 00:17:35.141 ************************************ 00:17:35.141 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:35.141 * Looking for test storage... 00:17:35.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:35.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.141 --rc genhtml_branch_coverage=1 00:17:35.141 --rc genhtml_function_coverage=1 00:17:35.141 --rc genhtml_legend=1 00:17:35.141 --rc geninfo_all_blocks=1 00:17:35.141 --rc geninfo_unexecuted_blocks=1 00:17:35.141 00:17:35.141 ' 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:35.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.141 --rc genhtml_branch_coverage=1 00:17:35.141 --rc genhtml_function_coverage=1 00:17:35.141 --rc genhtml_legend=1 00:17:35.141 --rc geninfo_all_blocks=1 00:17:35.141 --rc geninfo_unexecuted_blocks=1 00:17:35.141 00:17:35.141 ' 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:35.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.141 --rc genhtml_branch_coverage=1 00:17:35.141 --rc genhtml_function_coverage=1 00:17:35.141 --rc genhtml_legend=1 00:17:35.141 --rc geninfo_all_blocks=1 00:17:35.141 --rc geninfo_unexecuted_blocks=1 00:17:35.141 00:17:35.141 ' 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:35.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.141 --rc genhtml_branch_coverage=1 
00:17:35.141 --rc genhtml_function_coverage=1 00:17:35.141 --rc genhtml_legend=1 00:17:35.141 --rc geninfo_all_blocks=1 00:17:35.141 --rc geninfo_unexecuted_blocks=1 00:17:35.141 00:17:35.141 ' 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.141 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:35.142 12:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:35.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:35.142 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:41.710 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:41.711 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:41.711 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- 
# for pci in "${pci_devs[@]}" 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:41.711 Found net devices under 0000:86:00.0: cvl_0_0 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.711 
12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:41.711 Found net devices under 0000:86:00.1: cvl_0_1 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:41.711 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:41.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:17:41.711 00:17:41.711 --- 10.0.0.2 ping statistics --- 00:17:41.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.711 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:17:41.711 00:17:41.711 --- 10.0.0.1 ping statistics --- 00:17:41.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.711 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1226181 00:17:41.711 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1226181 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1226181 ']' 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.712 [2024-10-15 12:58:01.203412] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:17:41.712 [2024-10-15 12:58:01.203462] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:41.712 [2024-10-15 12:58:01.283828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:41.712 [2024-10-15 12:58:01.330125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.712 [2024-10-15 12:58:01.330156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.712 [2024-10-15 12:58:01.330164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.712 [2024-10-15 12:58:01.330169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.712 [2024-10-15 12:58:01.330175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:41.712 [2024-10-15 12:58:01.331372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:41.712 [2024-10-15 12:58:01.331395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:41.712 [2024-10-15 12:58:01.331484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:41.712 [2024-10-15 12:58:01.331485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.712 [2024-10-15 12:58:01.475146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:41.712 12:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.712 Malloc0 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.712 [2024-10-15 12:58:01.519452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.712 12:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:41.712 { 00:17:41.712 "params": { 00:17:41.712 "name": "Nvme$subsystem", 00:17:41.712 "trtype": "$TEST_TRANSPORT", 00:17:41.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:41.712 "adrfam": "ipv4", 00:17:41.712 "trsvcid": "$NVMF_PORT", 00:17:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:41.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:41.712 "hdgst": ${hdgst:-false}, 00:17:41.712 "ddgst": ${ddgst:-false} 00:17:41.712 }, 00:17:41.712 "method": "bdev_nvme_attach_controller" 00:17:41.712 } 00:17:41.712 EOF 00:17:41.712 )") 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:17:41.712 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:17:41.712 "params": { 00:17:41.712 "name": "Nvme1", 00:17:41.712 "trtype": "tcp", 00:17:41.712 "traddr": "10.0.0.2", 00:17:41.712 "adrfam": "ipv4", 00:17:41.712 "trsvcid": "4420", 00:17:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.712 "hdgst": false, 00:17:41.712 "ddgst": false 00:17:41.712 }, 00:17:41.712 "method": "bdev_nvme_attach_controller" 00:17:41.712 }' 00:17:41.712 [2024-10-15 12:58:01.569720] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:17:41.712 [2024-10-15 12:58:01.569767] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1226212 ] 00:17:41.712 [2024-10-15 12:58:01.640057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:41.712 [2024-10-15 12:58:01.687834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.712 [2024-10-15 12:58:01.687964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.712 [2024-10-15 12:58:01.687965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.712 I/O targets: 00:17:41.712 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:41.712 00:17:41.712 00:17:41.712 CUnit - A unit testing framework for C - Version 2.1-3 00:17:41.712 http://cunit.sourceforge.net/ 00:17:41.712 00:17:41.712 00:17:41.712 Suite: bdevio tests on: Nvme1n1 00:17:41.712 Test: blockdev write read block ...passed 00:17:41.712 Test: blockdev write zeroes read block ...passed 00:17:41.712 Test: blockdev write zeroes read no split ...passed 00:17:41.712 Test: blockdev write zeroes 
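The `gen_nvmf_target_json` call traced above builds the `--json` config that `bdevio` reads from `/dev/fd/62`: a heredoc per subsystem with the transport settings substituted in, producing the `bdev_nvme_attach_controller` entry printed in the log. A minimal sketch of that substitution step, using the values the log shows (the surrounding `jq`/`"subsystems"` wrapping done by the real helper is omitted):

```shell
# Transport settings as resolved in this run (from the log above).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

# One attach-controller config entry, mirroring the heredoc in
# nvmf/common.sh; hdgst/ddgst default to false when unset.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

Feeding this through a process substitution (`--json /dev/fd/62`) lets the test pass the config without writing a temp file, which is what the traced `bdevio` invocation does.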
read split ...passed 00:17:41.971 Test: blockdev write zeroes read split partial ...passed 00:17:41.971 Test: blockdev reset ...[2024-10-15 12:58:02.054416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:41.971 [2024-10-15 12:58:02.054477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x72ea20 (9): Bad file descriptor 00:17:41.971 [2024-10-15 12:58:02.107574] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:41.971 passed 00:17:41.971 Test: blockdev write read 8 blocks ...passed 00:17:41.971 Test: blockdev write read size > 128k ...passed 00:17:41.971 Test: blockdev write read invalid size ...passed 00:17:41.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:41.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:41.971 Test: blockdev write read max offset ...passed 00:17:41.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:41.971 Test: blockdev writev readv 8 blocks ...passed 00:17:41.971 Test: blockdev writev readv 30 x 1block ...passed 00:17:42.231 Test: blockdev writev readv block ...passed 00:17:42.231 Test: blockdev writev readv size > 128k ...passed 00:17:42.231 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:42.231 Test: blockdev comparev and writev ...[2024-10-15 12:58:02.323394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.231 [2024-10-15 12:58:02.323421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.231 [2024-10-15 12:58:02.323437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.231 [2024-10-15 12:58:02.323445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.231 [2024-10-15 12:58:02.323680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.231 [2024-10-15 12:58:02.323693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:42.231 [2024-10-15 12:58:02.323706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.231 [2024-10-15 12:58:02.323713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:42.231 [2024-10-15 12:58:02.323950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.231 [2024-10-15 12:58:02.323959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:42.231 [2024-10-15 12:58:02.323975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.231 [2024-10-15 12:58:02.323982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:42.231 [2024-10-15 12:58:02.324218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.231 [2024-10-15 12:58:02.324228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:42.231 [2024-10-15 12:58:02.324239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:17:42.231 [2024-10-15 12:58:02.324246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:42.231 passed 00:17:42.231 Test: blockdev nvme passthru rw ...passed 00:17:42.231 Test: blockdev nvme passthru vendor specific ...[2024-10-15 12:58:02.406900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:42.231 [2024-10-15 12:58:02.406916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:42.231 [2024-10-15 12:58:02.407018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:42.231 [2024-10-15 12:58:02.407027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:42.231 [2024-10-15 12:58:02.407128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:42.231 [2024-10-15 12:58:02.407137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:42.231 [2024-10-15 12:58:02.407236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:42.231 [2024-10-15 12:58:02.407245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:42.231 passed 00:17:42.231 Test: blockdev nvme admin passthru ...passed 00:17:42.231 Test: blockdev copy ...passed 00:17:42.231 00:17:42.231 Run Summary: Type Total Ran Passed Failed Inactive 00:17:42.231 suites 1 1 n/a 0 0 00:17:42.231 tests 23 23 23 0 0 00:17:42.231 asserts 152 152 152 0 n/a 00:17:42.231 00:17:42.231 Elapsed time = 1.244 seconds 00:17:42.491 12:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:42.491 rmmod nvme_tcp 00:17:42.491 rmmod nvme_fabrics 00:17:42.491 rmmod nvme_keyring 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1226181 ']' 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@516 -- # killprocess 1226181 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1226181 ']' 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1226181 00:17:42.491 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:17:42.750 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:42.750 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1226181 00:17:42.750 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:17:42.750 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:17:42.750 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1226181' 00:17:42.750 killing process with pid 1226181 00:17:42.750 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1226181 00:17:42.750 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1226181 00:17:43.009 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:43.009 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:43.009 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:43.009 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:43.009 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:17:43.009 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:43.009 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:17:43.009 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:43.009 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:43.009 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.009 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.009 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.915 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:44.915 00:17:44.915 real 0m10.293s 00:17:44.915 user 0m10.826s 00:17:44.915 sys 0m5.422s 00:17:44.915 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.915 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.915 ************************************ 00:17:44.915 END TEST nvmf_bdevio_no_huge 00:17:44.915 ************************************ 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:45.175 ************************************ 00:17:45.175 START TEST nvmf_tls 
00:17:45.175 ************************************ 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:45.175 * Looking for test storage... 00:17:45.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.175 12:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:17:45.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.175 --rc genhtml_branch_coverage=1 00:17:45.175 --rc genhtml_function_coverage=1 00:17:45.175 --rc genhtml_legend=1 00:17:45.175 --rc geninfo_all_blocks=1 00:17:45.175 --rc geninfo_unexecuted_blocks=1 00:17:45.175 00:17:45.175 ' 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:45.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.175 --rc genhtml_branch_coverage=1 00:17:45.175 --rc genhtml_function_coverage=1 00:17:45.175 --rc genhtml_legend=1 00:17:45.175 --rc geninfo_all_blocks=1 00:17:45.175 --rc geninfo_unexecuted_blocks=1 00:17:45.175 00:17:45.175 ' 00:17:45.175 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:45.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.175 --rc genhtml_branch_coverage=1 00:17:45.175 --rc genhtml_function_coverage=1 00:17:45.175 --rc genhtml_legend=1 00:17:45.175 --rc geninfo_all_blocks=1 00:17:45.176 --rc geninfo_unexecuted_blocks=1 00:17:45.176 00:17:45.176 ' 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:45.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.176 --rc genhtml_branch_coverage=1 00:17:45.176 --rc genhtml_function_coverage=1 00:17:45.176 --rc genhtml_legend=1 00:17:45.176 --rc geninfo_all_blocks=1 00:17:45.176 --rc geninfo_unexecuted_blocks=1 00:17:45.176 00:17:45.176 ' 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.176 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.435 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.435 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:45.435 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.435 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:45.435 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.435 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.435 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.435 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.435 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.435 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.435 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:45.436 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:52.009 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:52.010 12:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:52.010 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:52.010 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:52.010 12:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:52.010 Found net devices under 0000:86:00.0: cvl_0_0 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:52.010 Found net devices under 0000:86:00.1: cvl_0_1 00:17:52.010 12:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:52.010 
12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:52.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:17:52.010 00:17:52.010 --- 10.0.0.2 ping statistics --- 00:17:52.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.010 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:52.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:52.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:17:52.010 00:17:52.010 --- 10.0.0.1 ping statistics --- 00:17:52.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.010 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1229969 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1229969 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1229969 ']' 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.010 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.010 [2024-10-15 12:58:11.542360] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:17:52.010 [2024-10-15 12:58:11.542409] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.010 [2024-10-15 12:58:11.618317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.010 [2024-10-15 12:58:11.659616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.010 [2024-10-15 12:58:11.659646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:52.010 [2024-10-15 12:58:11.659653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.010 [2024-10-15 12:58:11.659659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.011 [2024-10-15 12:58:11.659665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.011 [2024-10-15 12:58:11.660217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.011 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:52.011 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:52.011 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:52.011 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:52.011 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.011 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.011 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:52.011 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:52.011 true 00:17:52.011 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.011 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:52.011 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:52.011 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:52.011 
12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:52.011 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.011 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:52.270 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:52.270 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:52.270 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:52.528 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.528 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:52.787 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:52.787 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:52.787 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.787 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:52.787 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:52.787 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:52.787 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:53.046 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:53.046 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:53.305 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:53.305 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:53.305 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:17:53.565 12:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:17:53.565 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:53.824 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:53.824 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.3jyfDdop48 00:17:53.824 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:53.824 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.eivpGCQbYX 00:17:53.824 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:53.824 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:53.824 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3jyfDdop48 00:17:53.824 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
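The two interchange keys just generated come from nvmf/common.sh's format_key helper, which shells out to an inline Python snippet (the `python -` invocations above). As a hedged sketch — the layout, base64 of the configured PSK followed by its CRC-32 appended little-endian, is inferred from the logged key values, and `format_interchange_psk` here is a stand-in name rather than SPDK's API — the keys above can be reproduced like this:

```python
import base64
import zlib

def format_interchange_psk(key: str, hmac_id: int) -> str:
    """Sketch of the PSK interchange layout seen in the log:
    'NVMeTLSkey-1:<hmac>:<base64(PSK || CRC32(PSK))>:'."""
    data = key.encode()
    # CRC-32 of the PSK bytes, appended little-endian before base64 encoding
    crc = zlib.crc32(data).to_bytes(4, "little")
    return f"NVMeTLSkey-1:{hmac_id:02x}:{base64.b64encode(data + crc).decode()}:"

# The two PSKs used by this test; hmac_id 1 matches the ':01:' field in the log
print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
print(format_interchange_psk("ffeeddccbbaa99887766554433221100", 1))
```

If the assumed layout is right, the two printed strings are exactly the `key=` and `key_2=` values captured above before they are written to the mktemp'd files and chmod'd to 0600.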
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.eivpGCQbYX 00:17:53.824 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:53.824 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:54.084 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.3jyfDdop48 00:17:54.084 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3jyfDdop48 00:17:54.084 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:54.343 [2024-10-15 12:58:14.505486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.343 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:54.602 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:54.602 [2024-10-15 12:58:14.862383] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:54.602 [2024-10-15 12:58:14.862657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.602 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:54.860 malloc0 00:17:54.861 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:55.119 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3jyfDdop48 00:17:55.119 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:55.378 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3jyfDdop48 00:18:05.563 Initializing NVMe Controllers 00:18:05.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:05.563 Initialization complete. Launching workers. 
00:18:05.563 ======================================================== 00:18:05.563 Latency(us) 00:18:05.563 Device Information : IOPS MiB/s Average min max 00:18:05.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16883.10 65.95 3790.81 1048.79 4543.57 00:18:05.563 ======================================================== 00:18:05.563 Total : 16883.10 65.95 3790.81 1048.79 4543.57 00:18:05.563 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3jyfDdop48 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3jyfDdop48 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1232335 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1232335 /var/tmp/bdevperf.sock 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1232335 ']' 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.563 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.563 [2024-10-15 12:58:25.758042] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:18:05.563 [2024-10-15 12:58:25.758088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232335 ] 00:18:05.563 [2024-10-15 12:58:25.825610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.563 [2024-10-15 12:58:25.867245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.822 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.822 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:05.822 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3jyfDdop48 00:18:05.822 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:06.080 [2024-10-15 12:58:26.313179] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.080 TLSTESTn1 00:18:06.080 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:06.338 Running I/O for 10 seconds... 00:18:08.206 5431.00 IOPS, 21.21 MiB/s [2024-10-15T10:58:29.903Z] 5608.00 IOPS, 21.91 MiB/s [2024-10-15T10:58:30.841Z] 5618.67 IOPS, 21.95 MiB/s [2024-10-15T10:58:31.779Z] 5581.25 IOPS, 21.80 MiB/s [2024-10-15T10:58:32.717Z] 5585.20 IOPS, 21.82 MiB/s [2024-10-15T10:58:33.654Z] 5620.83 IOPS, 21.96 MiB/s [2024-10-15T10:58:34.591Z] 5628.86 IOPS, 21.99 MiB/s [2024-10-15T10:58:35.528Z] 5624.75 IOPS, 21.97 MiB/s [2024-10-15T10:58:36.905Z] 5629.44 IOPS, 21.99 MiB/s [2024-10-15T10:58:36.905Z] 5636.40 IOPS, 22.02 MiB/s 00:18:16.586 Latency(us) 00:18:16.586 [2024-10-15T10:58:36.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.586 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:16.586 Verification LBA range: start 0x0 length 0x2000 00:18:16.586 TLSTESTn1 : 10.01 5642.18 22.04 0.00 0.00 22653.86 5180.46 27587.54 00:18:16.586 [2024-10-15T10:58:36.905Z] =================================================================================================================== 00:18:16.586 [2024-10-15T10:58:36.905Z] Total : 5642.18 22.04 0.00 0.00 22653.86 5180.46 27587.54 00:18:16.586 { 00:18:16.586 "results": [ 00:18:16.586 { 00:18:16.586 "job": "TLSTESTn1", 00:18:16.586 "core_mask": "0x4", 00:18:16.586 "workload": "verify", 00:18:16.586 "status": "finished", 00:18:16.586 "verify_range": { 00:18:16.586 "start": 0, 00:18:16.586 "length": 8192 00:18:16.586 }, 00:18:16.586 "queue_depth": 128, 00:18:16.586 "io_size": 4096, 00:18:16.586 "runtime": 10.012436, 00:18:16.586 "iops": 
5642.183380747702, 00:18:16.586 "mibps": 22.039778831045712, 00:18:16.586 "io_failed": 0, 00:18:16.586 "io_timeout": 0, 00:18:16.586 "avg_latency_us": 22653.85853997026, 00:18:16.586 "min_latency_us": 5180.464761904762, 00:18:16.586 "max_latency_us": 27587.53523809524 00:18:16.586 } 00:18:16.586 ], 00:18:16.586 "core_count": 1 00:18:16.586 } 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1232335 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1232335 ']' 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1232335 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1232335 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1232335' 00:18:16.586 killing process with pid 1232335 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1232335 00:18:16.586 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.586 00:18:16.586 Latency(us) 00:18:16.586 [2024-10-15T10:58:36.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.586 [2024-10-15T10:58:36.905Z] 
=================================================================================================================== 00:18:16.586 [2024-10-15T10:58:36.905Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1232335 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eivpGCQbYX 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eivpGCQbYX 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eivpGCQbYX 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.eivpGCQbYX 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1234154 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1234154 /var/tmp/bdevperf.sock 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1234154 ']' 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:16.586 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.586 [2024-10-15 12:58:36.802043] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:18:16.586 [2024-10-15 12:58:36.802091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234154 ] 00:18:16.586 [2024-10-15 12:58:36.861638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.586 [2024-10-15 12:58:36.898059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.845 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:16.845 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:16.845 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eivpGCQbYX 00:18:17.104 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:17.104 [2024-10-15 12:58:37.388119] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.104 [2024-10-15 12:58:37.398255] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:17.104 [2024-10-15 12:58:37.398429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c0230 (107): Transport endpoint is not connected 00:18:17.104 [2024-10-15 12:58:37.399423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c0230 (9): Bad file descriptor 00:18:17.104 
[2024-10-15 12:58:37.400424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:17.104 [2024-10-15 12:58:37.400434] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:17.104 [2024-10-15 12:58:37.400442] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:17.104 [2024-10-15 12:58:37.400455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:17.104 request: 00:18:17.104 { 00:18:17.104 "name": "TLSTEST", 00:18:17.104 "trtype": "tcp", 00:18:17.104 "traddr": "10.0.0.2", 00:18:17.104 "adrfam": "ipv4", 00:18:17.104 "trsvcid": "4420", 00:18:17.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.104 "prchk_reftag": false, 00:18:17.104 "prchk_guard": false, 00:18:17.104 "hdgst": false, 00:18:17.104 "ddgst": false, 00:18:17.104 "psk": "key0", 00:18:17.104 "allow_unrecognized_csi": false, 00:18:17.104 "method": "bdev_nvme_attach_controller", 00:18:17.104 "req_id": 1 00:18:17.104 } 00:18:17.104 Got JSON-RPC error response 00:18:17.104 response: 00:18:17.104 { 00:18:17.104 "code": -5, 00:18:17.104 "message": "Input/output error" 00:18:17.104 } 00:18:17.363 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1234154 00:18:17.363 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1234154 ']' 00:18:17.363 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1234154 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1234154 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1234154' 00:18:17.364 killing process with pid 1234154 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1234154 00:18:17.364 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.364 00:18:17.364 Latency(us) 00:18:17.364 [2024-10-15T10:58:37.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.364 [2024-10-15T10:58:37.683Z] =================================================================================================================== 00:18:17.364 [2024-10-15T10:58:37.683Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1234154 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3jyfDdop48 00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3jyfDdop48
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3jyfDdop48
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3jyfDdop48
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1234385
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1234385 /var/tmp/bdevperf.sock
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1234385 ']'
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:17.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:17.364 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:17.364 [2024-10-15 12:58:37.681108] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization...
00:18:17.364 [2024-10-15 12:58:37.681159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234385 ]
00:18:17.623 [2024-10-15 12:58:37.743070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:17.624 [2024-10-15 12:58:37.779914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:17.624 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:17.624 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:18:17.624 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3jyfDdop48
00:18:17.882 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
00:18:18.142 [2024-10-15 12:58:38.233779] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:18.142 [2024-10-15 12:58:38.240352] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:18:18.142 [2024-10-15 12:58:38.240375] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:18:18.142 [2024-10-15 12:58:38.240399] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:18:18.142 [2024-10-15 12:58:38.241108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97a230 (107): Transport endpoint is not connected
00:18:18.142 [2024-10-15 12:58:38.242102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97a230 (9): Bad file descriptor
00:18:18.142 [2024-10-15 12:58:38.243104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:18.142 [2024-10-15 12:58:38.243117] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:18:18.142 [2024-10-15 12:58:38.243127] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:18:18.142 [2024-10-15 12:58:38.243136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:18.142 request:
00:18:18.142 {
00:18:18.142 "name": "TLSTEST",
00:18:18.142 "trtype": "tcp",
00:18:18.142 "traddr": "10.0.0.2",
00:18:18.142 "adrfam": "ipv4",
00:18:18.142 "trsvcid": "4420",
00:18:18.142 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:18.142 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:18:18.142 "prchk_reftag": false,
00:18:18.142 "prchk_guard": false,
00:18:18.142 "hdgst": false,
00:18:18.142 "ddgst": false,
00:18:18.142 "psk": "key0",
00:18:18.142 "allow_unrecognized_csi": false,
00:18:18.142 "method": "bdev_nvme_attach_controller",
00:18:18.142 "req_id": 1
00:18:18.142 }
00:18:18.142 Got JSON-RPC error response
00:18:18.142 response:
00:18:18.142 {
00:18:18.142 "code": -5,
00:18:18.142 "message": "Input/output error"
00:18:18.142 }
00:18:18.142 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1234385
00:18:18.142 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1234385 ']'
00:18:18.142 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1234385
00:18:18.142 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:18:18.142 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:18.142 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1234385
00:18:18.142 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:18:18.142 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:18:18.142 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1234385'
00:18:18.142 killing process with pid 1234385
00:18:18.142 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1234385
00:18:18.142 Received shutdown signal, test time was about 10.000000 seconds
00:18:18.142
00:18:18.142 Latency(us)
00:18:18.142 [2024-10-15T10:58:38.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:18.142 [2024-10-15T10:58:38.461Z] ===================================================================================================================
00:18:18.142 [2024-10-15T10:58:38.461Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:18.142 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1234385
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3jyfDdop48
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3jyfDdop48
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3jyfDdop48
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3jyfDdop48
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1234436
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1234436 /var/tmp/bdevperf.sock
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1234436 ']'
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:18.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:18.402 [2024-10-15 12:58:38.506537] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization...
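Each bdevperf launch above is followed by a `waitforlisten <pid> /var/tmp/bdevperf.sock` step that blocks until the app's RPC socket is up. A hypothetical sketch of that wait loop is below; the function name, retry budget, and the plain existence check stand in for the harness's real RPC-level probe and are assumptions, not SPDK's actual code:

```shell
# Poll until a path (e.g. a UNIX domain RPC socket) appears, or give up.
wait_for_path() {
	local path=$1 max_retries=${2:-100} i
	for ((i = 0; i < max_retries; i++)); do
		# A real waitforlisten would also confirm the process is alive
		# and that the socket answers RPCs; existence suffices here.
		[ -e "$path" ] && return 0
		sleep 0.1
	done
	echo "timed out waiting for $path" >&2
	return 1
}
```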
00:18:18.402 [2024-10-15 12:58:38.506586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234436 ]
00:18:18.402 [2024-10-15 12:58:38.575772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:18.402 [2024-10-15 12:58:38.614023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:18:18.402 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3jyfDdop48
00:18:18.661 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0
00:18:18.920 [2024-10-15 12:58:39.079486] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:18.920 [2024-10-15 12:58:39.084205] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:18:18.920 [2024-10-15 12:58:39.084228] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:18:18.920 [2024-10-15 12:58:39.084250] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:18:18.920 [2024-10-15 12:58:39.084900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190c230 (107): Transport endpoint is not connected
00:18:18.920 [2024-10-15 12:58:39.085891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190c230 (9): Bad file descriptor
00:18:18.920 [2024-10-15 12:58:39.086892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:18:18.920 [2024-10-15 12:58:39.086901] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:18:18.920 [2024-10-15 12:58:39.086908] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted
00:18:18.920 [2024-10-15 12:58:39.086918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:18:18.920 request:
00:18:18.920 {
00:18:18.920 "name": "TLSTEST",
00:18:18.920 "trtype": "tcp",
00:18:18.920 "traddr": "10.0.0.2",
00:18:18.920 "adrfam": "ipv4",
00:18:18.920 "trsvcid": "4420",
00:18:18.920 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:18:18.920 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:18.920 "prchk_reftag": false,
00:18:18.920 "prchk_guard": false,
00:18:18.920 "hdgst": false,
00:18:18.920 "ddgst": false,
00:18:18.920 "psk": "key0",
00:18:18.921 "allow_unrecognized_csi": false,
00:18:18.921 "method": "bdev_nvme_attach_controller",
00:18:18.921 "req_id": 1
00:18:18.921 }
00:18:18.921 Got JSON-RPC error response
00:18:18.921 response:
00:18:18.921 {
00:18:18.921 "code": -5,
00:18:18.921 "message": "Input/output error"
00:18:18.921 }
00:18:18.921 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1234436
00:18:18.921 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1234436 ']'
00:18:18.921 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1234436
00:18:18.921 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:18:18.921 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:18.921 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1234436
00:18:18.921 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:18:18.921 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:18:18.921 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1234436'
00:18:18.921 killing process with pid 1234436
00:18:18.921 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1234436
00:18:18.921 Received shutdown signal, test time was about 10.000000 seconds
00:18:18.921
00:18:18.921 Latency(us)
00:18:18.921 [2024-10-15T10:58:39.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:18.921 [2024-10-15T10:58:39.240Z] ===================================================================================================================
00:18:18.921 [2024-10-15T10:58:39.240Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:18.921 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1234436
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:19.180 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1234639
00:18:19.181 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:19.181 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:19.181 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1234639 /var/tmp/bdevperf.sock
00:18:19.181 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1234639 ']'
00:18:19.181 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:19.181 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:19.181 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:19.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:19.181 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:19.181 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:19.181 [2024-10-15 12:58:39.360711] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization...
00:18:19.181 [2024-10-15 12:58:39.360762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234639 ]
00:18:19.181 [2024-10-15 12:58:39.428920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:19.181 [2024-10-15 12:58:39.465364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:19.440 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:19.440 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:18:19.440 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
00:18:19.440 [2024-10-15 12:58:39.722419] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed:
00:18:19.440 [2024-10-15 12:58:39.722452] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:18:19.440 request:
00:18:19.440 {
00:18:19.440 "name": "key0",
00:18:19.440 "path": "",
00:18:19.440 "method": "keyring_file_add_key",
00:18:19.440 "req_id": 1
00:18:19.440 }
00:18:19.440 Got JSON-RPC error response
00:18:19.440 response:
00:18:19.440 {
00:18:19.440 "code": -1,
00:18:19.440 "message": "Operation not permitted"
00:18:19.440 }
00:18:19.440 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:18:19.699 [2024-10-15 12:58:39.927029] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:19.699 [2024-10-15 12:58:39.927054] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0
00:18:19.699 request:
00:18:19.699 {
00:18:19.699 "name": "TLSTEST",
00:18:19.699 "trtype": "tcp",
00:18:19.699 "traddr": "10.0.0.2",
00:18:19.699 "adrfam": "ipv4",
00:18:19.699 "trsvcid": "4420",
00:18:19.699 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:19.699 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:19.699 "prchk_reftag": false,
00:18:19.699 "prchk_guard": false,
00:18:19.699 "hdgst": false,
00:18:19.699 "ddgst": false,
00:18:19.699 "psk": "key0",
00:18:19.699 "allow_unrecognized_csi": false,
00:18:19.699 "method": "bdev_nvme_attach_controller",
00:18:19.699 "req_id": 1
00:18:19.699 }
00:18:19.699 Got JSON-RPC error response
00:18:19.699 response:
00:18:19.699 {
00:18:19.699 "code": -126,
00:18:19.699 "message": "Required key not available"
00:18:19.699 }
00:18:19.699 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1234639
00:18:19.699 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1234639 ']'
00:18:19.699 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1234639
00:18:19.699 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:18:19.699 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:19.699 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1234639
00:18:19.699 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:18:19.699 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:18:19.699 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1234639'
00:18:19.699 killing process with pid 1234639
00:18:19.699 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1234639
00:18:19.699 Received shutdown signal, test time was about 10.000000 seconds
00:18:19.699
00:18:19.699 Latency(us)
00:18:19.699 [2024-10-15T10:58:40.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:19.699 [2024-10-15T10:58:40.018Z] ===================================================================================================================
00:18:19.699 [2024-10-15T10:58:40.018Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:19.699 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1234639
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1229969
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1229969 ']'
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1229969
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1229969
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1229969'
00:18:19.959 killing process with pid 1229969
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1229969
00:18:19.959 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1229969
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python -
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.yPNDaNbpPe
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.yPNDaNbpPe
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1234882
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1234882
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1234882 ']'
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:20.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:20.219 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:20.219 [2024-10-15 12:58:40.487540] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization...
00:18:20.219 [2024-10-15 12:58:40.487592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:20.479 [2024-10-15 12:58:40.560100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:20.479 [2024-10-15 12:58:40.597954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:20.479 [2024-10-15 12:58:40.597989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:20.479 [2024-10-15 12:58:40.597997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:20.479 [2024-10-15 12:58:40.598002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:20.479 [2024-10-15 12:58:40.598007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
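The `format_interchange_psk` step above builds the TLS PSK interchange string via an inline `python -` invocation: the configured PSK bytes are concatenated with their CRC32 (little endian), base64-encoded, and wrapped as `NVMeTLSkey-1:<hash>:<base64>:`. A standalone sketch of that computation is below; the function name is illustrative, not SPDK's, though the output format follows the NVMe/TCP PSK interchange convention shown in the log:

```python
import base64
import struct
import zlib

def format_interchange_psk(key: bytes, hash_id: int) -> str:
    """Build an NVMe/TCP TLS PSK interchange string: base64 of the
    configured PSK followed by its CRC32 (4 bytes, little endian),
    wrapped in the NVMeTLSkey-1 prefix and hash identifier."""
    digest = base64.b64encode(key + struct.pack("<I", zlib.crc32(key)))
    return f"NVMeTLSkey-1:{hash_id:02d}:{digest.decode()}:"

# The 48-character key from the log, passed as ASCII bytes as the
# harness does; hash_id 2 matches the "digest=2" variable in the trace.
key = b"00112233445566778899aabbccddeeff0011223344556677"
print(format_interchange_psk(key, 2))
```

Run against the log's inputs, this reproduces the `key_long` value that the test then writes to `/tmp/tmp.yPNDaNbpPe` and registers with `keyring_file_add_key`.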
00:18:20.479 [2024-10-15 12:58:40.598467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:20.479 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:20.479 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:18:20.479 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:18:20.479 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:20.479 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:20.479 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:20.479 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.yPNDaNbpPe
00:18:20.479 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yPNDaNbpPe
00:18:20.479 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:18:20.738 [2024-10-15 12:58:40.896127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:20.738 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:18:20.997 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:18:20.997 [2024-10-15 12:58:41.277113] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:20.997 [2024-10-15 12:58:41.277318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:20.997 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:18:21.256 malloc0
00:18:21.256 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:21.515 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yPNDaNbpPe
00:18:21.773 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yPNDaNbpPe
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yPNDaNbpPe
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1235140
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1235140 /var/tmp/bdevperf.sock
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1235140 ']'
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:22.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:22.033 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:22.033 [2024-10-15 12:58:42.146425] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization...
00:18:22.033 [2024-10-15 12:58:42.146472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235140 ] 00:18:22.033 [2024-10-15 12:58:42.212900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.033 [2024-10-15 12:58:42.252767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.034 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:22.034 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:22.034 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yPNDaNbpPe 00:18:22.293 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:22.552 [2024-10-15 12:58:42.715015] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:22.552 TLSTESTn1 00:18:22.552 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:22.811 Running I/O for 10 seconds... 
00:18:24.686 5455.00 IOPS, 21.31 MiB/s [2024-10-15T10:58:45.942Z] 5521.50 IOPS, 21.57 MiB/s [2024-10-15T10:58:47.320Z] 5535.67 IOPS, 21.62 MiB/s [2024-10-15T10:58:48.256Z] 5556.00 IOPS, 21.70 MiB/s [2024-10-15T10:58:49.192Z] 5555.40 IOPS, 21.70 MiB/s [2024-10-15T10:58:50.129Z] 5543.83 IOPS, 21.66 MiB/s [2024-10-15T10:58:51.064Z] 5552.00 IOPS, 21.69 MiB/s [2024-10-15T10:58:52.001Z] 5532.38 IOPS, 21.61 MiB/s [2024-10-15T10:58:52.948Z] 5537.00 IOPS, 21.63 MiB/s [2024-10-15T10:58:52.948Z] 5530.20 IOPS, 21.60 MiB/s 00:18:32.629 Latency(us) 00:18:32.629 [2024-10-15T10:58:52.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.629 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:32.629 Verification LBA range: start 0x0 length 0x2000 00:18:32.629 TLSTESTn1 : 10.02 5532.11 21.61 0.00 0.00 23102.39 4837.18 21845.33 00:18:32.629 [2024-10-15T10:58:52.948Z] =================================================================================================================== 00:18:32.629 [2024-10-15T10:58:52.948Z] Total : 5532.11 21.61 0.00 0.00 23102.39 4837.18 21845.33 00:18:32.629 { 00:18:32.629 "results": [ 00:18:32.629 { 00:18:32.629 "job": "TLSTESTn1", 00:18:32.629 "core_mask": "0x4", 00:18:32.629 "workload": "verify", 00:18:32.629 "status": "finished", 00:18:32.629 "verify_range": { 00:18:32.629 "start": 0, 00:18:32.629 "length": 8192 00:18:32.629 }, 00:18:32.629 "queue_depth": 128, 00:18:32.629 "io_size": 4096, 00:18:32.629 "runtime": 10.019332, 00:18:32.629 "iops": 5532.105333968372, 00:18:32.629 "mibps": 21.609786460813954, 00:18:32.629 "io_failed": 0, 00:18:32.629 "io_timeout": 0, 00:18:32.629 "avg_latency_us": 23102.388986828042, 00:18:32.629 "min_latency_us": 4837.1809523809525, 00:18:32.629 "max_latency_us": 21845.333333333332 00:18:32.629 } 00:18:32.629 ], 00:18:32.629 "core_count": 1 00:18:32.629 } 00:18:32.889 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:32.889 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1235140 00:18:32.889 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1235140 ']' 00:18:32.889 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1235140 00:18:32.889 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:32.889 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:32.889 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1235140 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1235140' 00:18:32.889 killing process with pid 1235140 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1235140 00:18:32.889 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.889 00:18:32.889 Latency(us) 00:18:32.889 [2024-10-15T10:58:53.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.889 [2024-10-15T10:58:53.208Z] =================================================================================================================== 00:18:32.889 [2024-10-15T10:58:53.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1235140 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.yPNDaNbpPe 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yPNDaNbpPe 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yPNDaNbpPe 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yPNDaNbpPe 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yPNDaNbpPe 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1236978 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1236978 /var/tmp/bdevperf.sock 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1236978 ']' 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:32.889 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.148 [2024-10-15 12:58:53.232483] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:18:33.148 [2024-10-15 12:58:53.232537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236978 ] 00:18:33.148 [2024-10-15 12:58:53.295957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.148 [2024-10-15 12:58:53.337371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.148 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.149 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:33.149 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yPNDaNbpPe 00:18:33.408 [2024-10-15 12:58:53.586405] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yPNDaNbpPe': 0100666 00:18:33.408 [2024-10-15 12:58:53.586437] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:33.408 request: 00:18:33.408 { 00:18:33.408 "name": "key0", 00:18:33.408 "path": "/tmp/tmp.yPNDaNbpPe", 00:18:33.408 "method": "keyring_file_add_key", 00:18:33.408 "req_id": 1 00:18:33.408 } 00:18:33.408 Got JSON-RPC error response 00:18:33.408 response: 00:18:33.408 { 00:18:33.408 "code": -1, 00:18:33.408 "message": "Operation not permitted" 00:18:33.408 } 00:18:33.408 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.668 [2024-10-15 12:58:53.778985] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.668 [2024-10-15 12:58:53.779010] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:33.668 request: 00:18:33.668 { 00:18:33.668 "name": "TLSTEST", 00:18:33.668 "trtype": "tcp", 00:18:33.668 "traddr": "10.0.0.2", 00:18:33.668 "adrfam": "ipv4", 00:18:33.668 "trsvcid": "4420", 00:18:33.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.668 "prchk_reftag": false, 00:18:33.668 "prchk_guard": false, 00:18:33.668 "hdgst": false, 00:18:33.668 "ddgst": false, 00:18:33.668 "psk": "key0", 00:18:33.668 "allow_unrecognized_csi": false, 00:18:33.668 "method": "bdev_nvme_attach_controller", 00:18:33.668 "req_id": 1 00:18:33.668 } 00:18:33.668 Got JSON-RPC error response 00:18:33.668 response: 00:18:33.668 { 00:18:33.668 "code": -126, 00:18:33.668 "message": "Required key not available" 00:18:33.668 } 00:18:33.668 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1236978 00:18:33.668 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1236978 ']' 00:18:33.668 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1236978 00:18:33.668 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:33.668 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:33.668 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1236978 00:18:33.668 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:33.668 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:33.668 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 1236978' 00:18:33.668 killing process with pid 1236978 00:18:33.668 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1236978 00:18:33.668 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.668 00:18:33.668 Latency(us) 00:18:33.668 [2024-10-15T10:58:53.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.668 [2024-10-15T10:58:53.987Z] =================================================================================================================== 00:18:33.668 [2024-10-15T10:58:53.987Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:33.668 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1236978 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1234882 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1234882 ']' 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1234882 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1234882 00:18:33.928 
12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1234882' 00:18:33.928 killing process with pid 1234882 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1234882 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1234882 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1237130 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1237130 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1237130 ']' 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:33.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.928 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.187 [2024-10-15 12:58:54.287729] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:18:34.187 [2024-10-15 12:58:54.287777] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.187 [2024-10-15 12:58:54.358911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.187 [2024-10-15 12:58:54.399158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.187 [2024-10-15 12:58:54.399194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.187 [2024-10-15 12:58:54.399202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.187 [2024-10-15 12:58:54.399208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.187 [2024-10-15 12:58:54.399232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:34.187 [2024-10-15 12:58:54.399808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.187 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.187 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:34.187 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:34.187 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:34.187 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.446 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.446 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.yPNDaNbpPe 00:18:34.446 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:34.446 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.yPNDaNbpPe 00:18:34.446 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:34.446 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.446 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:34.446 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.446 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.yPNDaNbpPe 00:18:34.446 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yPNDaNbpPe 00:18:34.446 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:34.446 [2024-10-15 12:58:54.698624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.446 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:34.704 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:34.964 [2024-10-15 12:58:55.087612] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.964 [2024-10-15 12:58:55.087843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.964 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:34.964 malloc0 00:18:35.222 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:35.222 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yPNDaNbpPe 00:18:35.481 [2024-10-15 12:58:55.656919] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yPNDaNbpPe': 0100666 00:18:35.481 [2024-10-15 12:58:55.656947] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:35.481 request: 00:18:35.481 { 00:18:35.481 "name": "key0", 00:18:35.481 "path": "/tmp/tmp.yPNDaNbpPe", 00:18:35.481 "method": "keyring_file_add_key", 00:18:35.481 "req_id": 1 
00:18:35.481 } 00:18:35.481 Got JSON-RPC error response 00:18:35.481 response: 00:18:35.481 { 00:18:35.481 "code": -1, 00:18:35.481 "message": "Operation not permitted" 00:18:35.481 } 00:18:35.481 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:35.740 [2024-10-15 12:58:55.849435] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:35.740 [2024-10-15 12:58:55.849467] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:35.740 request: 00:18:35.740 { 00:18:35.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.740 "host": "nqn.2016-06.io.spdk:host1", 00:18:35.740 "psk": "key0", 00:18:35.740 "method": "nvmf_subsystem_add_host", 00:18:35.740 "req_id": 1 00:18:35.740 } 00:18:35.740 Got JSON-RPC error response 00:18:35.740 response: 00:18:35.740 { 00:18:35.740 "code": -32603, 00:18:35.740 "message": "Internal error" 00:18:35.740 } 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1237130 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1237130 ']' 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1237130 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:35.740 12:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1237130 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1237130' 00:18:35.740 killing process with pid 1237130 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1237130 00:18:35.740 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1237130 00:18:35.999 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.yPNDaNbpPe 00:18:35.999 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:35.999 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:35.999 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:36.000 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.000 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1237479 00:18:36.000 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:36.000 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1237479 00:18:36.000 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1237479 ']' 00:18:36.000 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.000 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.000 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.000 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.000 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.000 [2024-10-15 12:58:56.150084] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:18:36.000 [2024-10-15 12:58:56.150130] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.000 [2024-10-15 12:58:56.222488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.000 [2024-10-15 12:58:56.261251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.000 [2024-10-15 12:58:56.261279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.000 [2024-10-15 12:58:56.261286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.000 [2024-10-15 12:58:56.261292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.000 [2024-10-15 12:58:56.261297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:36.000 [2024-10-15 12:58:56.261754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.258 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:36.258 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:36.258 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:36.258 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:36.258 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.258 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.258 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.yPNDaNbpPe 00:18:36.259 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yPNDaNbpPe 00:18:36.259 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:36.259 [2024-10-15 12:58:56.564368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:36.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:36.776 [2024-10-15 12:58:56.949367] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:36.776 [2024-10-15 12:58:56.949584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:36.776 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:37.035 malloc0 00:18:37.035 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:37.294 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yPNDaNbpPe 00:18:37.294 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:37.552 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.552 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1237740 00:18:37.552 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.552 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1237740 /var/tmp/bdevperf.sock 00:18:37.552 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1237740 ']' 00:18:37.552 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.552 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:37.552 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:37.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.552 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:37.552 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.552 [2024-10-15 12:58:57.786340] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:18:37.552 [2024-10-15 12:58:57.786388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237740 ] 00:18:37.552 [2024-10-15 12:58:57.854800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.811 [2024-10-15 12:58:57.895574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.811 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.811 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:37.811 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yPNDaNbpPe 00:18:38.070 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:38.070 [2024-10-15 12:58:58.353883] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.328 TLSTESTn1 00:18:38.328 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:38.588 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:38.588 "subsystems": [ 00:18:38.588 { 00:18:38.588 "subsystem": "keyring", 00:18:38.588 "config": [ 00:18:38.588 { 00:18:38.588 "method": "keyring_file_add_key", 00:18:38.588 "params": { 00:18:38.588 "name": "key0", 00:18:38.588 "path": "/tmp/tmp.yPNDaNbpPe" 00:18:38.588 } 00:18:38.588 } 00:18:38.588 ] 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "subsystem": "iobuf", 00:18:38.588 "config": [ 00:18:38.588 { 00:18:38.588 "method": "iobuf_set_options", 00:18:38.588 "params": { 00:18:38.588 "small_pool_count": 8192, 00:18:38.588 "large_pool_count": 1024, 00:18:38.588 "small_bufsize": 8192, 00:18:38.588 "large_bufsize": 135168 00:18:38.588 } 00:18:38.588 } 00:18:38.588 ] 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "subsystem": "sock", 00:18:38.588 "config": [ 00:18:38.588 { 00:18:38.588 "method": "sock_set_default_impl", 00:18:38.588 "params": { 00:18:38.588 "impl_name": "posix" 00:18:38.588 } 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "method": "sock_impl_set_options", 00:18:38.588 "params": { 00:18:38.588 "impl_name": "ssl", 00:18:38.588 "recv_buf_size": 4096, 00:18:38.588 "send_buf_size": 4096, 00:18:38.588 "enable_recv_pipe": true, 00:18:38.588 "enable_quickack": false, 00:18:38.588 "enable_placement_id": 0, 00:18:38.588 "enable_zerocopy_send_server": true, 00:18:38.588 "enable_zerocopy_send_client": false, 00:18:38.588 "zerocopy_threshold": 0, 00:18:38.588 "tls_version": 0, 00:18:38.588 "enable_ktls": false 00:18:38.588 } 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "method": "sock_impl_set_options", 00:18:38.588 "params": { 00:18:38.588 "impl_name": "posix", 00:18:38.588 "recv_buf_size": 2097152, 00:18:38.588 "send_buf_size": 2097152, 00:18:38.588 "enable_recv_pipe": true, 00:18:38.588 "enable_quickack": false, 00:18:38.588 "enable_placement_id": 0, 00:18:38.588 
"enable_zerocopy_send_server": true, 00:18:38.588 "enable_zerocopy_send_client": false, 00:18:38.588 "zerocopy_threshold": 0, 00:18:38.588 "tls_version": 0, 00:18:38.588 "enable_ktls": false 00:18:38.588 } 00:18:38.588 } 00:18:38.588 ] 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "subsystem": "vmd", 00:18:38.588 "config": [] 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "subsystem": "accel", 00:18:38.588 "config": [ 00:18:38.588 { 00:18:38.588 "method": "accel_set_options", 00:18:38.588 "params": { 00:18:38.588 "small_cache_size": 128, 00:18:38.588 "large_cache_size": 16, 00:18:38.588 "task_count": 2048, 00:18:38.588 "sequence_count": 2048, 00:18:38.588 "buf_count": 2048 00:18:38.588 } 00:18:38.588 } 00:18:38.588 ] 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "subsystem": "bdev", 00:18:38.588 "config": [ 00:18:38.588 { 00:18:38.588 "method": "bdev_set_options", 00:18:38.588 "params": { 00:18:38.588 "bdev_io_pool_size": 65535, 00:18:38.588 "bdev_io_cache_size": 256, 00:18:38.588 "bdev_auto_examine": true, 00:18:38.588 "iobuf_small_cache_size": 128, 00:18:38.588 "iobuf_large_cache_size": 16 00:18:38.588 } 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "method": "bdev_raid_set_options", 00:18:38.588 "params": { 00:18:38.588 "process_window_size_kb": 1024, 00:18:38.588 "process_max_bandwidth_mb_sec": 0 00:18:38.588 } 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "method": "bdev_iscsi_set_options", 00:18:38.588 "params": { 00:18:38.588 "timeout_sec": 30 00:18:38.588 } 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "method": "bdev_nvme_set_options", 00:18:38.588 "params": { 00:18:38.588 "action_on_timeout": "none", 00:18:38.588 "timeout_us": 0, 00:18:38.588 "timeout_admin_us": 0, 00:18:38.588 "keep_alive_timeout_ms": 10000, 00:18:38.588 "arbitration_burst": 0, 00:18:38.588 "low_priority_weight": 0, 00:18:38.588 "medium_priority_weight": 0, 00:18:38.588 "high_priority_weight": 0, 00:18:38.588 "nvme_adminq_poll_period_us": 10000, 00:18:38.588 "nvme_ioq_poll_period_us": 0, 00:18:38.588 
"io_queue_requests": 0, 00:18:38.588 "delay_cmd_submit": true, 00:18:38.588 "transport_retry_count": 4, 00:18:38.588 "bdev_retry_count": 3, 00:18:38.588 "transport_ack_timeout": 0, 00:18:38.588 "ctrlr_loss_timeout_sec": 0, 00:18:38.588 "reconnect_delay_sec": 0, 00:18:38.588 "fast_io_fail_timeout_sec": 0, 00:18:38.588 "disable_auto_failback": false, 00:18:38.588 "generate_uuids": false, 00:18:38.588 "transport_tos": 0, 00:18:38.588 "nvme_error_stat": false, 00:18:38.588 "rdma_srq_size": 0, 00:18:38.588 "io_path_stat": false, 00:18:38.588 "allow_accel_sequence": false, 00:18:38.588 "rdma_max_cq_size": 0, 00:18:38.588 "rdma_cm_event_timeout_ms": 0, 00:18:38.588 "dhchap_digests": [ 00:18:38.588 "sha256", 00:18:38.588 "sha384", 00:18:38.588 "sha512" 00:18:38.588 ], 00:18:38.588 "dhchap_dhgroups": [ 00:18:38.588 "null", 00:18:38.588 "ffdhe2048", 00:18:38.588 "ffdhe3072", 00:18:38.588 "ffdhe4096", 00:18:38.588 "ffdhe6144", 00:18:38.588 "ffdhe8192" 00:18:38.588 ] 00:18:38.588 } 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "method": "bdev_nvme_set_hotplug", 00:18:38.588 "params": { 00:18:38.588 "period_us": 100000, 00:18:38.588 "enable": false 00:18:38.588 } 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "method": "bdev_malloc_create", 00:18:38.588 "params": { 00:18:38.588 "name": "malloc0", 00:18:38.588 "num_blocks": 8192, 00:18:38.588 "block_size": 4096, 00:18:38.588 "physical_block_size": 4096, 00:18:38.588 "uuid": "d4bd94f6-db0e-4e3b-bb1b-3c1e1e662819", 00:18:38.588 "optimal_io_boundary": 0, 00:18:38.588 "md_size": 0, 00:18:38.588 "dif_type": 0, 00:18:38.588 "dif_is_head_of_md": false, 00:18:38.588 "dif_pi_format": 0 00:18:38.588 } 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "method": "bdev_wait_for_examine" 00:18:38.588 } 00:18:38.588 ] 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "subsystem": "nbd", 00:18:38.588 "config": [] 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "subsystem": "scheduler", 00:18:38.588 "config": [ 00:18:38.588 { 00:18:38.588 "method": 
"framework_set_scheduler", 00:18:38.588 "params": { 00:18:38.588 "name": "static" 00:18:38.588 } 00:18:38.588 } 00:18:38.588 ] 00:18:38.588 }, 00:18:38.588 { 00:18:38.588 "subsystem": "nvmf", 00:18:38.588 "config": [ 00:18:38.589 { 00:18:38.589 "method": "nvmf_set_config", 00:18:38.589 "params": { 00:18:38.589 "discovery_filter": "match_any", 00:18:38.589 "admin_cmd_passthru": { 00:18:38.589 "identify_ctrlr": false 00:18:38.589 }, 00:18:38.589 "dhchap_digests": [ 00:18:38.589 "sha256", 00:18:38.589 "sha384", 00:18:38.589 "sha512" 00:18:38.589 ], 00:18:38.589 "dhchap_dhgroups": [ 00:18:38.589 "null", 00:18:38.589 "ffdhe2048", 00:18:38.589 "ffdhe3072", 00:18:38.589 "ffdhe4096", 00:18:38.589 "ffdhe6144", 00:18:38.589 "ffdhe8192" 00:18:38.589 ] 00:18:38.589 } 00:18:38.589 }, 00:18:38.589 { 00:18:38.589 "method": "nvmf_set_max_subsystems", 00:18:38.589 "params": { 00:18:38.589 "max_subsystems": 1024 00:18:38.589 } 00:18:38.589 }, 00:18:38.589 { 00:18:38.589 "method": "nvmf_set_crdt", 00:18:38.589 "params": { 00:18:38.589 "crdt1": 0, 00:18:38.589 "crdt2": 0, 00:18:38.589 "crdt3": 0 00:18:38.589 } 00:18:38.589 }, 00:18:38.589 { 00:18:38.589 "method": "nvmf_create_transport", 00:18:38.589 "params": { 00:18:38.589 "trtype": "TCP", 00:18:38.589 "max_queue_depth": 128, 00:18:38.589 "max_io_qpairs_per_ctrlr": 127, 00:18:38.589 "in_capsule_data_size": 4096, 00:18:38.589 "max_io_size": 131072, 00:18:38.589 "io_unit_size": 131072, 00:18:38.589 "max_aq_depth": 128, 00:18:38.589 "num_shared_buffers": 511, 00:18:38.589 "buf_cache_size": 4294967295, 00:18:38.589 "dif_insert_or_strip": false, 00:18:38.589 "zcopy": false, 00:18:38.589 "c2h_success": false, 00:18:38.589 "sock_priority": 0, 00:18:38.589 "abort_timeout_sec": 1, 00:18:38.589 "ack_timeout": 0, 00:18:38.589 "data_wr_pool_size": 0 00:18:38.589 } 00:18:38.589 }, 00:18:38.589 { 00:18:38.589 "method": "nvmf_create_subsystem", 00:18:38.589 "params": { 00:18:38.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.589 
"allow_any_host": false, 00:18:38.589 "serial_number": "SPDK00000000000001", 00:18:38.589 "model_number": "SPDK bdev Controller", 00:18:38.589 "max_namespaces": 10, 00:18:38.589 "min_cntlid": 1, 00:18:38.589 "max_cntlid": 65519, 00:18:38.589 "ana_reporting": false 00:18:38.589 } 00:18:38.589 }, 00:18:38.589 { 00:18:38.589 "method": "nvmf_subsystem_add_host", 00:18:38.589 "params": { 00:18:38.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.589 "host": "nqn.2016-06.io.spdk:host1", 00:18:38.589 "psk": "key0" 00:18:38.589 } 00:18:38.589 }, 00:18:38.589 { 00:18:38.589 "method": "nvmf_subsystem_add_ns", 00:18:38.589 "params": { 00:18:38.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.589 "namespace": { 00:18:38.589 "nsid": 1, 00:18:38.589 "bdev_name": "malloc0", 00:18:38.589 "nguid": "D4BD94F6DB0E4E3BBB1B3C1E1E662819", 00:18:38.589 "uuid": "d4bd94f6-db0e-4e3b-bb1b-3c1e1e662819", 00:18:38.589 "no_auto_visible": false 00:18:38.589 } 00:18:38.589 } 00:18:38.589 }, 00:18:38.589 { 00:18:38.589 "method": "nvmf_subsystem_add_listener", 00:18:38.589 "params": { 00:18:38.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.589 "listen_address": { 00:18:38.589 "trtype": "TCP", 00:18:38.589 "adrfam": "IPv4", 00:18:38.589 "traddr": "10.0.0.2", 00:18:38.589 "trsvcid": "4420" 00:18:38.589 }, 00:18:38.589 "secure_channel": true 00:18:38.589 } 00:18:38.589 } 00:18:38.589 ] 00:18:38.589 } 00:18:38.589 ] 00:18:38.589 }' 00:18:38.589 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:38.848 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:38.848 "subsystems": [ 00:18:38.848 { 00:18:38.848 "subsystem": "keyring", 00:18:38.848 "config": [ 00:18:38.848 { 00:18:38.848 "method": "keyring_file_add_key", 00:18:38.848 "params": { 00:18:38.848 "name": "key0", 00:18:38.848 "path": "/tmp/tmp.yPNDaNbpPe" 00:18:38.848 } 
00:18:38.848 } 00:18:38.848 ] 00:18:38.848 }, 00:18:38.848 { 00:18:38.848 "subsystem": "iobuf", 00:18:38.849 "config": [ 00:18:38.849 { 00:18:38.849 "method": "iobuf_set_options", 00:18:38.849 "params": { 00:18:38.849 "small_pool_count": 8192, 00:18:38.849 "large_pool_count": 1024, 00:18:38.849 "small_bufsize": 8192, 00:18:38.849 "large_bufsize": 135168 00:18:38.849 } 00:18:38.849 } 00:18:38.849 ] 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "subsystem": "sock", 00:18:38.849 "config": [ 00:18:38.849 { 00:18:38.849 "method": "sock_set_default_impl", 00:18:38.849 "params": { 00:18:38.849 "impl_name": "posix" 00:18:38.849 } 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "method": "sock_impl_set_options", 00:18:38.849 "params": { 00:18:38.849 "impl_name": "ssl", 00:18:38.849 "recv_buf_size": 4096, 00:18:38.849 "send_buf_size": 4096, 00:18:38.849 "enable_recv_pipe": true, 00:18:38.849 "enable_quickack": false, 00:18:38.849 "enable_placement_id": 0, 00:18:38.849 "enable_zerocopy_send_server": true, 00:18:38.849 "enable_zerocopy_send_client": false, 00:18:38.849 "zerocopy_threshold": 0, 00:18:38.849 "tls_version": 0, 00:18:38.849 "enable_ktls": false 00:18:38.849 } 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "method": "sock_impl_set_options", 00:18:38.849 "params": { 00:18:38.849 "impl_name": "posix", 00:18:38.849 "recv_buf_size": 2097152, 00:18:38.849 "send_buf_size": 2097152, 00:18:38.849 "enable_recv_pipe": true, 00:18:38.849 "enable_quickack": false, 00:18:38.849 "enable_placement_id": 0, 00:18:38.849 "enable_zerocopy_send_server": true, 00:18:38.849 "enable_zerocopy_send_client": false, 00:18:38.849 "zerocopy_threshold": 0, 00:18:38.849 "tls_version": 0, 00:18:38.849 "enable_ktls": false 00:18:38.849 } 00:18:38.849 } 00:18:38.849 ] 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "subsystem": "vmd", 00:18:38.849 "config": [] 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "subsystem": "accel", 00:18:38.849 "config": [ 00:18:38.849 { 00:18:38.849 "method": "accel_set_options", 
00:18:38.849 "params": { 00:18:38.849 "small_cache_size": 128, 00:18:38.849 "large_cache_size": 16, 00:18:38.849 "task_count": 2048, 00:18:38.849 "sequence_count": 2048, 00:18:38.849 "buf_count": 2048 00:18:38.849 } 00:18:38.849 } 00:18:38.849 ] 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "subsystem": "bdev", 00:18:38.849 "config": [ 00:18:38.849 { 00:18:38.849 "method": "bdev_set_options", 00:18:38.849 "params": { 00:18:38.849 "bdev_io_pool_size": 65535, 00:18:38.849 "bdev_io_cache_size": 256, 00:18:38.849 "bdev_auto_examine": true, 00:18:38.849 "iobuf_small_cache_size": 128, 00:18:38.849 "iobuf_large_cache_size": 16 00:18:38.849 } 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "method": "bdev_raid_set_options", 00:18:38.849 "params": { 00:18:38.849 "process_window_size_kb": 1024, 00:18:38.849 "process_max_bandwidth_mb_sec": 0 00:18:38.849 } 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "method": "bdev_iscsi_set_options", 00:18:38.849 "params": { 00:18:38.849 "timeout_sec": 30 00:18:38.849 } 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "method": "bdev_nvme_set_options", 00:18:38.849 "params": { 00:18:38.849 "action_on_timeout": "none", 00:18:38.849 "timeout_us": 0, 00:18:38.849 "timeout_admin_us": 0, 00:18:38.849 "keep_alive_timeout_ms": 10000, 00:18:38.849 "arbitration_burst": 0, 00:18:38.849 "low_priority_weight": 0, 00:18:38.849 "medium_priority_weight": 0, 00:18:38.849 "high_priority_weight": 0, 00:18:38.849 "nvme_adminq_poll_period_us": 10000, 00:18:38.849 "nvme_ioq_poll_period_us": 0, 00:18:38.849 "io_queue_requests": 512, 00:18:38.849 "delay_cmd_submit": true, 00:18:38.849 "transport_retry_count": 4, 00:18:38.849 "bdev_retry_count": 3, 00:18:38.849 "transport_ack_timeout": 0, 00:18:38.849 "ctrlr_loss_timeout_sec": 0, 00:18:38.849 "reconnect_delay_sec": 0, 00:18:38.849 "fast_io_fail_timeout_sec": 0, 00:18:38.849 "disable_auto_failback": false, 00:18:38.849 "generate_uuids": false, 00:18:38.849 "transport_tos": 0, 00:18:38.849 "nvme_error_stat": false, 00:18:38.849 
"rdma_srq_size": 0, 00:18:38.849 "io_path_stat": false, 00:18:38.849 "allow_accel_sequence": false, 00:18:38.849 "rdma_max_cq_size": 0, 00:18:38.849 "rdma_cm_event_timeout_ms": 0, 00:18:38.849 "dhchap_digests": [ 00:18:38.849 "sha256", 00:18:38.849 "sha384", 00:18:38.849 "sha512" 00:18:38.849 ], 00:18:38.849 "dhchap_dhgroups": [ 00:18:38.849 "null", 00:18:38.849 "ffdhe2048", 00:18:38.849 "ffdhe3072", 00:18:38.849 "ffdhe4096", 00:18:38.849 "ffdhe6144", 00:18:38.849 "ffdhe8192" 00:18:38.849 ] 00:18:38.849 } 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "method": "bdev_nvme_attach_controller", 00:18:38.849 "params": { 00:18:38.849 "name": "TLSTEST", 00:18:38.849 "trtype": "TCP", 00:18:38.849 "adrfam": "IPv4", 00:18:38.849 "traddr": "10.0.0.2", 00:18:38.849 "trsvcid": "4420", 00:18:38.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.849 "prchk_reftag": false, 00:18:38.849 "prchk_guard": false, 00:18:38.849 "ctrlr_loss_timeout_sec": 0, 00:18:38.849 "reconnect_delay_sec": 0, 00:18:38.849 "fast_io_fail_timeout_sec": 0, 00:18:38.849 "psk": "key0", 00:18:38.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.849 "hdgst": false, 00:18:38.849 "ddgst": false, 00:18:38.849 "multipath": "multipath" 00:18:38.849 } 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "method": "bdev_nvme_set_hotplug", 00:18:38.849 "params": { 00:18:38.849 "period_us": 100000, 00:18:38.849 "enable": false 00:18:38.849 } 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "method": "bdev_wait_for_examine" 00:18:38.849 } 00:18:38.849 ] 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "subsystem": "nbd", 00:18:38.849 "config": [] 00:18:38.849 } 00:18:38.849 ] 00:18:38.849 }' 00:18:38.849 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1237740 00:18:38.849 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1237740 ']' 00:18:38.849 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1237740 00:18:38.849 
12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:38.849 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.849 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1237740 00:18:38.849 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:38.849 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:38.849 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1237740' 00:18:38.849 killing process with pid 1237740 00:18:38.849 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1237740 00:18:38.849 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.849 00:18:38.849 Latency(us) 00:18:38.849 [2024-10-15T10:58:59.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.849 [2024-10-15T10:58:59.168Z] =================================================================================================================== 00:18:38.849 [2024-10-15T10:58:59.168Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:38.849 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1237740 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1237479 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1237479 ']' 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1237479 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux 
= Linux ']' 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1237479 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1237479' 00:18:39.109 killing process with pid 1237479 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1237479 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1237479 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.109 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:39.109 "subsystems": [ 00:18:39.109 { 00:18:39.109 "subsystem": "keyring", 00:18:39.109 "config": [ 00:18:39.109 { 00:18:39.109 "method": "keyring_file_add_key", 00:18:39.109 "params": { 00:18:39.109 "name": "key0", 00:18:39.109 "path": "/tmp/tmp.yPNDaNbpPe" 00:18:39.109 } 00:18:39.109 } 00:18:39.109 ] 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "subsystem": "iobuf", 00:18:39.109 "config": [ 00:18:39.109 { 00:18:39.109 "method": "iobuf_set_options", 00:18:39.109 "params": { 00:18:39.109 "small_pool_count": 8192, 00:18:39.109 "large_pool_count": 1024, 00:18:39.109 "small_bufsize": 8192, 00:18:39.109 "large_bufsize": 135168 00:18:39.109 } 00:18:39.109 } 00:18:39.109 ] 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "subsystem": "sock", 00:18:39.109 "config": [ 00:18:39.109 
{ 00:18:39.109 "method": "sock_set_default_impl", 00:18:39.109 "params": { 00:18:39.109 "impl_name": "posix" 00:18:39.109 } 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "method": "sock_impl_set_options", 00:18:39.109 "params": { 00:18:39.109 "impl_name": "ssl", 00:18:39.109 "recv_buf_size": 4096, 00:18:39.109 "send_buf_size": 4096, 00:18:39.109 "enable_recv_pipe": true, 00:18:39.109 "enable_quickack": false, 00:18:39.109 "enable_placement_id": 0, 00:18:39.109 "enable_zerocopy_send_server": true, 00:18:39.109 "enable_zerocopy_send_client": false, 00:18:39.109 "zerocopy_threshold": 0, 00:18:39.109 "tls_version": 0, 00:18:39.109 "enable_ktls": false 00:18:39.109 } 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "method": "sock_impl_set_options", 00:18:39.109 "params": { 00:18:39.109 "impl_name": "posix", 00:18:39.109 "recv_buf_size": 2097152, 00:18:39.109 "send_buf_size": 2097152, 00:18:39.109 "enable_recv_pipe": true, 00:18:39.109 "enable_quickack": false, 00:18:39.109 "enable_placement_id": 0, 00:18:39.109 "enable_zerocopy_send_server": true, 00:18:39.109 "enable_zerocopy_send_client": false, 00:18:39.109 "zerocopy_threshold": 0, 00:18:39.109 "tls_version": 0, 00:18:39.109 "enable_ktls": false 00:18:39.109 } 00:18:39.109 } 00:18:39.109 ] 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "subsystem": "vmd", 00:18:39.109 "config": [] 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "subsystem": "accel", 00:18:39.109 "config": [ 00:18:39.109 { 00:18:39.109 "method": "accel_set_options", 00:18:39.109 "params": { 00:18:39.109 "small_cache_size": 128, 00:18:39.109 "large_cache_size": 16, 00:18:39.109 "task_count": 2048, 00:18:39.109 "sequence_count": 2048, 00:18:39.109 "buf_count": 2048 00:18:39.109 } 00:18:39.109 } 00:18:39.109 ] 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "subsystem": "bdev", 00:18:39.109 "config": [ 00:18:39.109 { 00:18:39.109 "method": "bdev_set_options", 00:18:39.109 "params": { 00:18:39.109 "bdev_io_pool_size": 65535, 00:18:39.109 "bdev_io_cache_size": 256, 
00:18:39.109 "bdev_auto_examine": true, 00:18:39.109 "iobuf_small_cache_size": 128, 00:18:39.109 "iobuf_large_cache_size": 16 00:18:39.109 } 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "method": "bdev_raid_set_options", 00:18:39.109 "params": { 00:18:39.109 "process_window_size_kb": 1024, 00:18:39.109 "process_max_bandwidth_mb_sec": 0 00:18:39.109 } 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "method": "bdev_iscsi_set_options", 00:18:39.109 "params": { 00:18:39.109 "timeout_sec": 30 00:18:39.109 } 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "method": "bdev_nvme_set_options", 00:18:39.109 "params": { 00:18:39.109 "action_on_timeout": "none", 00:18:39.109 "timeout_us": 0, 00:18:39.109 "timeout_admin_us": 0, 00:18:39.109 "keep_alive_timeout_ms": 10000, 00:18:39.109 "arbitration_burst": 0, 00:18:39.109 "low_priority_weight": 0, 00:18:39.109 "medium_priority_weight": 0, 00:18:39.109 "high_priority_weight": 0, 00:18:39.109 "nvme_adminq_poll_period_us": 10000, 00:18:39.109 "nvme_ioq_poll_period_us": 0, 00:18:39.109 "io_queue_requests": 0, 00:18:39.109 "delay_cmd_submit": true, 00:18:39.109 "transport_retry_count": 4, 00:18:39.109 "bdev_retry_count": 3, 00:18:39.109 "transport_ack_timeout": 0, 00:18:39.109 "ctrlr_loss_timeout_sec": 0, 00:18:39.109 "reconnect_delay_sec": 0, 00:18:39.109 "fast_io_fail_timeout_sec": 0, 00:18:39.109 "disable_auto_failback": false, 00:18:39.109 "generate_uuids": false, 00:18:39.109 "transport_tos": 0, 00:18:39.109 "nvme_error_stat": false, 00:18:39.109 "rdma_srq_size": 0, 00:18:39.109 "io_path_stat": false, 00:18:39.109 "allow_accel_sequence": false, 00:18:39.109 "rdma_max_cq_size": 0, 00:18:39.110 "rdma_cm_event_timeout_ms": 0, 00:18:39.110 "dhchap_digests": [ 00:18:39.110 "sha256", 00:18:39.110 "sha384", 00:18:39.110 "sha512" 00:18:39.110 ], 00:18:39.110 "dhchap_dhgroups": [ 00:18:39.110 "null", 00:18:39.110 "ffdhe2048", 00:18:39.110 "ffdhe3072", 00:18:39.110 "ffdhe4096", 00:18:39.110 "ffdhe6144", 00:18:39.110 "ffdhe8192" 00:18:39.110 ] 
00:18:39.110 } 00:18:39.110 }, 00:18:39.110 { 00:18:39.110 "method": "bdev_nvme_set_hotplug", 00:18:39.110 "params": { 00:18:39.110 "period_us": 100000, 00:18:39.110 "enable": false 00:18:39.110 } 00:18:39.110 }, 00:18:39.110 { 00:18:39.110 "method": "bdev_malloc_create", 00:18:39.110 "params": { 00:18:39.110 "name": "malloc0", 00:18:39.110 "num_blocks": 8192, 00:18:39.110 "block_size": 4096, 00:18:39.110 "physical_block_size": 4096, 00:18:39.110 "uuid": "d4bd94f6-db0e-4e3b-bb1b-3c1e1e662819", 00:18:39.110 "optimal_io_boundary": 0, 00:18:39.110 "md_size": 0, 00:18:39.110 "dif_type": 0, 00:18:39.110 "dif_is_head_of_md": false, 00:18:39.110 "dif_pi_format": 0 00:18:39.110 } 00:18:39.110 }, 00:18:39.110 { 00:18:39.110 "method": "bdev_wait_for_examine" 00:18:39.110 } 00:18:39.110 ] 00:18:39.110 }, 00:18:39.110 { 00:18:39.110 "subsystem": "nbd", 00:18:39.110 "config": [] 00:18:39.110 }, 00:18:39.110 { 00:18:39.110 "subsystem": "scheduler", 00:18:39.110 "config": [ 00:18:39.110 { 00:18:39.110 "method": "framework_set_scheduler", 00:18:39.110 "params": { 00:18:39.110 "name": "static" 00:18:39.110 } 00:18:39.110 } 00:18:39.110 ] 00:18:39.110 }, 00:18:39.110 { 00:18:39.110 "subsystem": "nvmf", 00:18:39.110 "config": [ 00:18:39.110 { 00:18:39.110 "method": "nvmf_set_config", 00:18:39.110 "params": { 00:18:39.110 "discovery_filter": "match_any", 00:18:39.110 "admin_cmd_passthru": { 00:18:39.110 "identify_ctrlr": false 00:18:39.110 }, 00:18:39.110 "dhchap_digests": [ 00:18:39.110 "sha256", 00:18:39.110 "sha384", 00:18:39.110 "sha512" 00:18:39.110 ], 00:18:39.110 "dhchap_dhgroups": [ 00:18:39.110 "null", 00:18:39.110 "ffdhe2048", 00:18:39.110 "ffdhe3072", 00:18:39.110 "ffdhe4096", 00:18:39.110 "ffdhe6144", 00:18:39.110 "ffdhe8192" 00:18:39.110 ] 00:18:39.110 } 00:18:39.110 }, 00:18:39.110 { 00:18:39.110 "method": "nvmf_set_max_subsystems", 00:18:39.110 "params": { 00:18:39.110 "max_subsystems": 1024 00:18:39.110 } 00:18:39.110 }, 00:18:39.110 { 00:18:39.110 "method": 
"nvmf_set_crdt", 00:18:39.110 "params": { 00:18:39.110 "crdt1": 0, 00:18:39.110 "crdt2": 0, 00:18:39.110 "crdt3": 0 00:18:39.110 } 00:18:39.110 }, 00:18:39.110 { 00:18:39.110 "method": "nvmf_create_transport", 00:18:39.110 "params": { 00:18:39.110 "trtype": "TCP", 00:18:39.110 "max_queue_depth": 128, 00:18:39.110 "max_io_qpairs_per_ctrlr": 127, 00:18:39.110 "in_capsule_data_size": 4096, 00:18:39.110 "max_io_size": 131072, 00:18:39.110 "io_unit_size": 131072, 00:18:39.110 "max_aq_depth": 128, 00:18:39.110 "num_shared_buffers": 511, 00:18:39.110 "buf_cache_size": 4294967295, 00:18:39.110 "dif_insert_or_strip": false, 00:18:39.110 "zcopy": false, 00:18:39.110 "c2h_success": false, 00:18:39.110 "sock_priority": 0, 00:18:39.110 "abort_timeout_sec": 1, 00:18:39.110 "ack_timeout": 0, 00:18:39.110 "data_wr_pool_size": 0 00:18:39.110 } 00:18:39.110 }, 00:18:39.110 { 00:18:39.110 "method": "nvmf_create_subsystem", 00:18:39.110 "params": { 00:18:39.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.110 "allow_any_host": false, 00:18:39.110 "serial_number": "SPDK00000000000001", 00:18:39.110 "model_number": "SPDK bdev Controller", 00:18:39.110 "max_namespaces": 10, 00:18:39.110 "min_cntlid": 1, 00:18:39.110 "max_cntlid": 65519, 00:18:39.110 "ana_reporting": false 00:18:39.110 } 00:18:39.110 }, 00:18:39.110 { 00:18:39.110 "method": "nvmf_subsystem_add_host", 00:18:39.110 "params": { 00:18:39.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.110 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.110 "psk": "key0" 00:18:39.110 } 00:18:39.110 }, 00:18:39.110 { 00:18:39.110 "method": "nvmf_subsystem_add_ns", 00:18:39.110 "params": { 00:18:39.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.110 "namespace": { 00:18:39.110 "nsid": 1, 00:18:39.110 "bdev_name": "malloc0", 00:18:39.110 "nguid": "D4BD94F6DB0E4E3BBB1B3C1E1E662819", 00:18:39.110 "uuid": "d4bd94f6-db0e-4e3b-bb1b-3c1e1e662819", 00:18:39.110 "no_auto_visible": false 00:18:39.110 } 00:18:39.110 } 00:18:39.110 }, 00:18:39.110 { 
00:18:39.110 "method": "nvmf_subsystem_add_listener", 00:18:39.110 "params": { 00:18:39.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.110 "listen_address": { 00:18:39.110 "trtype": "TCP", 00:18:39.110 "adrfam": "IPv4", 00:18:39.110 "traddr": "10.0.0.2", 00:18:39.110 "trsvcid": "4420" 00:18:39.110 }, 00:18:39.110 "secure_channel": true 00:18:39.110 } 00:18:39.110 } 00:18:39.110 ] 00:18:39.110 } 00:18:39.110 ] 00:18:39.110 }' 00:18:39.110 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.110 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1237993 00:18:39.110 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1237993 00:18:39.110 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:39.110 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1237993 ']' 00:18:39.110 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.110 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.110 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.110 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.110 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.368 [2024-10-15 12:58:59.467086] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:18:39.368 [2024-10-15 12:58:59.467130] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.368 [2024-10-15 12:58:59.535488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.368 [2024-10-15 12:58:59.575406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.368 [2024-10-15 12:58:59.575439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.368 [2024-10-15 12:58:59.575446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.368 [2024-10-15 12:58:59.575452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.368 [2024-10-15 12:58:59.575457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:39.368 [2024-10-15 12:58:59.576052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.626 [2024-10-15 12:58:59.789172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.626 [2024-10-15 12:58:59.821192] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:39.626 [2024-10-15 12:58:59.821393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.193 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.193 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:40.193 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:40.193 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:40.193 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.194 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.194 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1238236 00:18:40.194 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1238236 /var/tmp/bdevperf.sock 00:18:40.194 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1238236 ']' 00:18:40.194 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:40.194 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.194 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:18:40.194 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.194 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:40.194 "subsystems": [ 00:18:40.194 { 00:18:40.194 "subsystem": "keyring", 00:18:40.194 "config": [ 00:18:40.194 { 00:18:40.194 "method": "keyring_file_add_key", 00:18:40.194 "params": { 00:18:40.194 "name": "key0", 00:18:40.194 "path": "/tmp/tmp.yPNDaNbpPe" 00:18:40.194 } 00:18:40.194 } 00:18:40.194 ] 00:18:40.194 }, 00:18:40.194 { 00:18:40.194 "subsystem": "iobuf", 00:18:40.194 "config": [ 00:18:40.194 { 00:18:40.194 "method": "iobuf_set_options", 00:18:40.194 "params": { 00:18:40.194 "small_pool_count": 8192, 00:18:40.194 "large_pool_count": 1024, 00:18:40.194 "small_bufsize": 8192, 00:18:40.194 "large_bufsize": 135168 00:18:40.194 } 00:18:40.194 } 00:18:40.194 ] 00:18:40.194 }, 00:18:40.194 { 00:18:40.194 "subsystem": "sock", 00:18:40.194 "config": [ 00:18:40.194 { 00:18:40.194 "method": "sock_set_default_impl", 00:18:40.194 "params": { 00:18:40.194 "impl_name": "posix" 00:18:40.194 } 00:18:40.194 }, 00:18:40.194 { 00:18:40.194 "method": "sock_impl_set_options", 00:18:40.194 "params": { 00:18:40.194 "impl_name": "ssl", 00:18:40.194 "recv_buf_size": 4096, 00:18:40.194 "send_buf_size": 4096, 00:18:40.194 "enable_recv_pipe": true, 00:18:40.194 "enable_quickack": false, 00:18:40.194 "enable_placement_id": 0, 00:18:40.194 "enable_zerocopy_send_server": true, 00:18:40.194 "enable_zerocopy_send_client": false, 00:18:40.194 "zerocopy_threshold": 0, 00:18:40.194 "tls_version": 0, 00:18:40.194 "enable_ktls": false 00:18:40.194 } 00:18:40.194 }, 00:18:40.194 { 00:18:40.194 "method": "sock_impl_set_options", 00:18:40.194 "params": { 00:18:40.194 "impl_name": "posix", 
00:18:40.194 "recv_buf_size": 2097152, 00:18:40.194 "send_buf_size": 2097152, 00:18:40.194 "enable_recv_pipe": true, 00:18:40.194 "enable_quickack": false, 00:18:40.194 "enable_placement_id": 0, 00:18:40.194 "enable_zerocopy_send_server": true, 00:18:40.194 "enable_zerocopy_send_client": false, 00:18:40.194 "zerocopy_threshold": 0, 00:18:40.194 "tls_version": 0, 00:18:40.194 "enable_ktls": false 00:18:40.194 } 00:18:40.194 } 00:18:40.194 ] 00:18:40.194 }, 00:18:40.194 { 00:18:40.194 "subsystem": "vmd", 00:18:40.194 "config": [] 00:18:40.194 }, 00:18:40.194 { 00:18:40.194 "subsystem": "accel", 00:18:40.194 "config": [ 00:18:40.194 { 00:18:40.194 "method": "accel_set_options", 00:18:40.194 "params": { 00:18:40.194 "small_cache_size": 128, 00:18:40.194 "large_cache_size": 16, 00:18:40.194 "task_count": 2048, 00:18:40.194 "sequence_count": 2048, 00:18:40.194 "buf_count": 2048 00:18:40.194 } 00:18:40.194 } 00:18:40.194 ] 00:18:40.194 }, 00:18:40.194 { 00:18:40.194 "subsystem": "bdev", 00:18:40.194 "config": [ 00:18:40.194 { 00:18:40.194 "method": "bdev_set_options", 00:18:40.194 "params": { 00:18:40.194 "bdev_io_pool_size": 65535, 00:18:40.194 "bdev_io_cache_size": 256, 00:18:40.194 "bdev_auto_examine": true, 00:18:40.194 "iobuf_small_cache_size": 128, 00:18:40.194 "iobuf_large_cache_size": 16 00:18:40.194 } 00:18:40.194 }, 00:18:40.194 { 00:18:40.194 "method": "bdev_raid_set_options", 00:18:40.194 "params": { 00:18:40.194 "process_window_size_kb": 1024, 00:18:40.194 "process_max_bandwidth_mb_sec": 0 00:18:40.194 } 00:18:40.194 }, 00:18:40.194 { 00:18:40.194 "method": "bdev_iscsi_set_options", 00:18:40.194 "params": { 00:18:40.194 "timeout_sec": 30 00:18:40.194 } 00:18:40.194 }, 00:18:40.194 { 00:18:40.194 "method": "bdev_nvme_set_options", 00:18:40.194 "params": { 00:18:40.194 "action_on_timeout": "none", 00:18:40.194 "timeout_us": 0, 00:18:40.194 "timeout_admin_us": 0, 00:18:40.194 "keep_alive_timeout_ms": 10000, 00:18:40.194 "arbitration_burst": 0, 00:18:40.194 
"low_priority_weight": 0, 00:18:40.194 "medium_priority_weight": 0, 00:18:40.194 "high_priority_weight": 0, 00:18:40.194 "nvme_adminq_poll_period_us": 10000, 00:18:40.194 "nvme_ioq_poll_period_us": 0, 00:18:40.194 "io_queue_requests": 512, 00:18:40.194 "delay_cmd_submit": true, 00:18:40.194 "transport_retry_count": 4, 00:18:40.194 "bdev_retry_count": 3, 00:18:40.194 "transport_ack_timeout": 0, 00:18:40.194 "ctrlr_loss_timeout_sec": 0, 00:18:40.194 "reconnect_delay_sec": 0, 00:18:40.194 "fast_io_fail_timeout_sec": 0, 00:18:40.194 "disable_auto_failback": false, 00:18:40.194 "generate_uuids": false, 00:18:40.194 "transport_tos": 0, 00:18:40.194 "nvme_error_stat": false, 00:18:40.194 "rdma_srq_size": 0, 00:18:40.194 "io_path_stat": false, 00:18:40.194 "allow_accel_sequence": false, 00:18:40.194 "rdma_max_cq_size": 0, 00:18:40.194 "rdma_cm_event_timeout_ms": 0, 00:18:40.194 "dhchap_digests": [ 00:18:40.194 "sha256", 00:18:40.194 "sha384", 00:18:40.194 "sha512" 00:18:40.194 ], 00:18:40.194 "dhchap_dhgroups": [ 00:18:40.194 "null", 00:18:40.194 "ffdhe2048", 00:18:40.194 "ffdhe3072", 00:18:40.194 "ffdhe4096", 00:18:40.194 "ffdhe6144", 00:18:40.194 "ffdhe8192" 00:18:40.194 ] 00:18:40.194 } 00:18:40.194 }, 00:18:40.194 { 00:18:40.194 "method": "bdev_nvme_attach_controller", 00:18:40.194 "params": { 00:18:40.194 "name": "TLSTEST", 00:18:40.194 "trtype": "TCP", 00:18:40.194 "adrfam": "IPv4", 00:18:40.194 "traddr": "10.0.0.2", 00:18:40.194 "trsvcid": "4420", 00:18:40.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.194 "prchk_reftag": false, 00:18:40.195 "prchk_guard": false, 00:18:40.195 "ctrlr_loss_timeout_sec": 0, 00:18:40.195 "reconnect_delay_sec": 0, 00:18:40.195 "fast_io_fail_timeout_sec": 0, 00:18:40.195 "psk": "key0", 00:18:40.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:40.195 "hdgst": false, 00:18:40.195 "ddgst": false, 00:18:40.195 "multipath": "multipath" 00:18:40.195 } 00:18:40.195 }, 00:18:40.195 { 00:18:40.195 "method": "bdev_nvme_set_hotplug", 
00:18:40.195 "params": { 00:18:40.195 "period_us": 100000, 00:18:40.195 "enable": false 00:18:40.195 } 00:18:40.195 }, 00:18:40.195 { 00:18:40.195 "method": "bdev_wait_for_examine" 00:18:40.195 } 00:18:40.195 ] 00:18:40.195 }, 00:18:40.195 { 00:18:40.195 "subsystem": "nbd", 00:18:40.195 "config": [] 00:18:40.195 } 00:18:40.195 ] 00:18:40.195 }' 00:18:40.195 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:40.195 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.195 [2024-10-15 12:59:00.377783] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:18:40.195 [2024-10-15 12:59:00.377826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238236 ] 00:18:40.195 [2024-10-15 12:59:00.445934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.195 [2024-10-15 12:59:00.487437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.454 [2024-10-15 12:59:00.637515] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.021 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.021 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:41.021 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:41.021 Running I/O for 10 seconds... 
00:18:43.336 5469.00 IOPS, 21.36 MiB/s [2024-10-15T10:59:04.593Z] 5542.00 IOPS, 21.65 MiB/s [2024-10-15T10:59:05.530Z] 5542.00 IOPS, 21.65 MiB/s [2024-10-15T10:59:06.556Z] 5515.50 IOPS, 21.54 MiB/s [2024-10-15T10:59:07.511Z] 5527.60 IOPS, 21.59 MiB/s [2024-10-15T10:59:08.446Z] 5540.33 IOPS, 21.64 MiB/s [2024-10-15T10:59:09.382Z] 5540.86 IOPS, 21.64 MiB/s [2024-10-15T10:59:10.758Z] 5532.38 IOPS, 21.61 MiB/s [2024-10-15T10:59:11.327Z] 5545.00 IOPS, 21.66 MiB/s [2024-10-15T10:59:11.586Z] 5547.70 IOPS, 21.67 MiB/s 00:18:51.267 Latency(us) 00:18:51.267 [2024-10-15T10:59:11.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.267 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:51.267 Verification LBA range: start 0x0 length 0x2000 00:18:51.267 TLSTESTn1 : 10.02 5550.41 21.68 0.00 0.00 23025.25 5149.26 23842.62 00:18:51.267 [2024-10-15T10:59:11.586Z] =================================================================================================================== 00:18:51.267 [2024-10-15T10:59:11.586Z] Total : 5550.41 21.68 0.00 0.00 23025.25 5149.26 23842.62 00:18:51.267 { 00:18:51.267 "results": [ 00:18:51.267 { 00:18:51.268 "job": "TLSTESTn1", 00:18:51.268 "core_mask": "0x4", 00:18:51.268 "workload": "verify", 00:18:51.268 "status": "finished", 00:18:51.268 "verify_range": { 00:18:51.268 "start": 0, 00:18:51.268 "length": 8192 00:18:51.268 }, 00:18:51.268 "queue_depth": 128, 00:18:51.268 "io_size": 4096, 00:18:51.268 "runtime": 10.017813, 00:18:51.268 "iops": 5550.413049235397, 00:18:51.268 "mibps": 21.68130097357577, 00:18:51.268 "io_failed": 0, 00:18:51.268 "io_timeout": 0, 00:18:51.268 "avg_latency_us": 23025.24630315425, 00:18:51.268 "min_latency_us": 5149.257142857143, 00:18:51.268 "max_latency_us": 23842.620952380952 00:18:51.268 } 00:18:51.268 ], 00:18:51.268 "core_count": 1 00:18:51.268 } 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1238236 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1238236 ']' 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1238236 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1238236 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1238236' 00:18:51.268 killing process with pid 1238236 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1238236 00:18:51.268 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.268 00:18:51.268 Latency(us) 00:18:51.268 [2024-10-15T10:59:11.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.268 [2024-10-15T10:59:11.587Z] =================================================================================================================== 00:18:51.268 [2024-10-15T10:59:11.587Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1238236 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1237993 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 1237993 ']' 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1237993 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:51.268 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1237993 00:18:51.527 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:51.527 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:51.527 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1237993' 00:18:51.527 killing process with pid 1237993 00:18:51.527 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1237993 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1237993 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1240088 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1240088 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:51.528 
12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1240088 ']' 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.528 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.528 [2024-10-15 12:59:11.843659] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:18:51.528 [2024-10-15 12:59:11.843714] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.787 [2024-10-15 12:59:11.915657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.787 [2024-10-15 12:59:11.955880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.787 [2024-10-15 12:59:11.955914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.787 [2024-10-15 12:59:11.955921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.787 [2024-10-15 12:59:11.955927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:51.787 [2024-10-15 12:59:11.955932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.787 [2024-10-15 12:59:11.956476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.787 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:51.787 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:51.787 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:51.787 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:51.787 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.787 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.787 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.yPNDaNbpPe 00:18:51.787 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yPNDaNbpPe 00:18:51.787 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:52.046 [2024-10-15 12:59:12.247296] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.046 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:52.304 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:52.563 [2024-10-15 12:59:12.656354] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:52.563 [2024-10-15 12:59:12.656559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.563 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:52.563 malloc0 00:18:52.823 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:52.823 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yPNDaNbpPe 00:18:53.082 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:53.341 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:53.341 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1240342 00:18:53.341 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.341 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1240342 /var/tmp/bdevperf.sock 00:18:53.341 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1240342 ']' 00:18:53.341 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.341 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:53.341 
12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.341 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:53.341 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.341 [2024-10-15 12:59:13.522448] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:18:53.341 [2024-10-15 12:59:13.522500] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240342 ] 00:18:53.341 [2024-10-15 12:59:13.592014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.341 [2024-10-15 12:59:13.632810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.600 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.600 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:53.600 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yPNDaNbpPe 00:18:53.858 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:53.858 [2024-10-15 12:59:14.115201] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:54.117 nvme0n1 00:18:54.117 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:54.117 Running I/O for 1 seconds... 00:18:55.053 5308.00 IOPS, 20.73 MiB/s 00:18:55.053 Latency(us) 00:18:55.053 [2024-10-15T10:59:15.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.053 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:55.053 Verification LBA range: start 0x0 length 0x2000 00:18:55.053 nvme0n1 : 1.02 5347.00 20.89 0.00 0.00 23768.04 4962.01 49932.19 00:18:55.053 [2024-10-15T10:59:15.372Z] =================================================================================================================== 00:18:55.053 [2024-10-15T10:59:15.372Z] Total : 5347.00 20.89 0.00 0.00 23768.04 4962.01 49932.19 00:18:55.053 { 00:18:55.053 "results": [ 00:18:55.053 { 00:18:55.053 "job": "nvme0n1", 00:18:55.053 "core_mask": "0x2", 00:18:55.053 "workload": "verify", 00:18:55.053 "status": "finished", 00:18:55.053 "verify_range": { 00:18:55.053 "start": 0, 00:18:55.053 "length": 8192 00:18:55.053 }, 00:18:55.053 "queue_depth": 128, 00:18:55.053 "io_size": 4096, 00:18:55.053 "runtime": 1.016644, 00:18:55.053 "iops": 5347.004457804305, 00:18:55.053 "mibps": 20.886736163298067, 00:18:55.053 "io_failed": 0, 00:18:55.053 "io_timeout": 0, 00:18:55.053 "avg_latency_us": 23768.040713409722, 00:18:55.053 "min_latency_us": 4962.011428571429, 00:18:55.053 "max_latency_us": 49932.19047619047 00:18:55.053 } 00:18:55.053 ], 00:18:55.053 "core_count": 1 00:18:55.053 } 00:18:55.053 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1240342 00:18:55.053 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1240342 ']' 00:18:55.053 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 1240342 00:18:55.053 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:55.053 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:55.053 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1240342 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1240342' 00:18:55.313 killing process with pid 1240342 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1240342 00:18:55.313 Received shutdown signal, test time was about 1.000000 seconds 00:18:55.313 00:18:55.313 Latency(us) 00:18:55.313 [2024-10-15T10:59:15.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.313 [2024-10-15T10:59:15.632Z] =================================================================================================================== 00:18:55.313 [2024-10-15T10:59:15.632Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1240342 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1240088 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1240088 ']' 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1240088 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1240088 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1240088' 00:18:55.313 killing process with pid 1240088 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1240088 00:18:55.313 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1240088 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1240810 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1240810 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1240810 ']' 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:55.572 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.572 [2024-10-15 12:59:15.822619] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:18:55.572 [2024-10-15 12:59:15.822688] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.572 [2024-10-15 12:59:15.891348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.832 [2024-10-15 12:59:15.926670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.832 [2024-10-15 12:59:15.926705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.832 [2024-10-15 12:59:15.926712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.832 [2024-10-15 12:59:15.926719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.832 [2024-10-15 12:59:15.926724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:55.832 [2024-10-15 12:59:15.927303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.832 [2024-10-15 12:59:16.069277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.832 malloc0 00:18:55.832 [2024-10-15 12:59:16.097391] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.832 [2024-10-15 12:59:16.097637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1240832 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1240832 /var/tmp/bdevperf.sock 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1240832 ']' 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:55.832 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.091 [2024-10-15 12:59:16.172034] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:18:56.091 [2024-10-15 12:59:16.172076] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240832 ] 00:18:56.091 [2024-10-15 12:59:16.240874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.091 [2024-10-15 12:59:16.282450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.091 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:56.091 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:56.091 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yPNDaNbpPe 00:18:56.350 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:56.610 [2024-10-15 12:59:16.741991] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.610 nvme0n1 00:18:56.610 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.610 Running I/O for 1 seconds... 
00:18:57.992 5392.00 IOPS, 21.06 MiB/s 00:18:57.992 Latency(us) 00:18:57.992 [2024-10-15T10:59:18.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.992 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:57.992 Verification LBA range: start 0x0 length 0x2000 00:18:57.992 nvme0n1 : 1.01 5454.69 21.31 0.00 0.00 23312.30 4805.97 20721.86 00:18:57.992 [2024-10-15T10:59:18.311Z] =================================================================================================================== 00:18:57.992 [2024-10-15T10:59:18.311Z] Total : 5454.69 21.31 0.00 0.00 23312.30 4805.97 20721.86 00:18:57.992 { 00:18:57.992 "results": [ 00:18:57.992 { 00:18:57.992 "job": "nvme0n1", 00:18:57.992 "core_mask": "0x2", 00:18:57.992 "workload": "verify", 00:18:57.992 "status": "finished", 00:18:57.992 "verify_range": { 00:18:57.992 "start": 0, 00:18:57.992 "length": 8192 00:18:57.992 }, 00:18:57.992 "queue_depth": 128, 00:18:57.992 "io_size": 4096, 00:18:57.992 "runtime": 1.012156, 00:18:57.992 "iops": 5454.692754871779, 00:18:57.992 "mibps": 21.307393573717885, 00:18:57.992 "io_failed": 0, 00:18:57.992 "io_timeout": 0, 00:18:57.992 "avg_latency_us": 23312.30339845266, 00:18:57.992 "min_latency_us": 4805.973333333333, 00:18:57.992 "max_latency_us": 20721.859047619047 00:18:57.992 } 00:18:57.992 ], 00:18:57.992 "core_count": 1 00:18:57.992 } 00:18:57.992 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:57.992 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.992 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.992 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.992 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:57.992 "subsystems": [ 00:18:57.992 { 00:18:57.992 "subsystem": 
"keyring", 00:18:57.993 "config": [ 00:18:57.993 { 00:18:57.993 "method": "keyring_file_add_key", 00:18:57.993 "params": { 00:18:57.993 "name": "key0", 00:18:57.993 "path": "/tmp/tmp.yPNDaNbpPe" 00:18:57.993 } 00:18:57.993 } 00:18:57.993 ] 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "subsystem": "iobuf", 00:18:57.993 "config": [ 00:18:57.993 { 00:18:57.993 "method": "iobuf_set_options", 00:18:57.993 "params": { 00:18:57.993 "small_pool_count": 8192, 00:18:57.993 "large_pool_count": 1024, 00:18:57.993 "small_bufsize": 8192, 00:18:57.993 "large_bufsize": 135168 00:18:57.993 } 00:18:57.993 } 00:18:57.993 ] 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "subsystem": "sock", 00:18:57.993 "config": [ 00:18:57.993 { 00:18:57.993 "method": "sock_set_default_impl", 00:18:57.993 "params": { 00:18:57.993 "impl_name": "posix" 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "sock_impl_set_options", 00:18:57.993 "params": { 00:18:57.993 "impl_name": "ssl", 00:18:57.993 "recv_buf_size": 4096, 00:18:57.993 "send_buf_size": 4096, 00:18:57.993 "enable_recv_pipe": true, 00:18:57.993 "enable_quickack": false, 00:18:57.993 "enable_placement_id": 0, 00:18:57.993 "enable_zerocopy_send_server": true, 00:18:57.993 "enable_zerocopy_send_client": false, 00:18:57.993 "zerocopy_threshold": 0, 00:18:57.993 "tls_version": 0, 00:18:57.993 "enable_ktls": false 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "sock_impl_set_options", 00:18:57.993 "params": { 00:18:57.993 "impl_name": "posix", 00:18:57.993 "recv_buf_size": 2097152, 00:18:57.993 "send_buf_size": 2097152, 00:18:57.993 "enable_recv_pipe": true, 00:18:57.993 "enable_quickack": false, 00:18:57.993 "enable_placement_id": 0, 00:18:57.993 "enable_zerocopy_send_server": true, 00:18:57.993 "enable_zerocopy_send_client": false, 00:18:57.993 "zerocopy_threshold": 0, 00:18:57.993 "tls_version": 0, 00:18:57.993 "enable_ktls": false 00:18:57.993 } 00:18:57.993 } 00:18:57.993 ] 00:18:57.993 }, 00:18:57.993 { 
00:18:57.993 "subsystem": "vmd", 00:18:57.993 "config": [] 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "subsystem": "accel", 00:18:57.993 "config": [ 00:18:57.993 { 00:18:57.993 "method": "accel_set_options", 00:18:57.993 "params": { 00:18:57.993 "small_cache_size": 128, 00:18:57.993 "large_cache_size": 16, 00:18:57.993 "task_count": 2048, 00:18:57.993 "sequence_count": 2048, 00:18:57.993 "buf_count": 2048 00:18:57.993 } 00:18:57.993 } 00:18:57.993 ] 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "subsystem": "bdev", 00:18:57.993 "config": [ 00:18:57.993 { 00:18:57.993 "method": "bdev_set_options", 00:18:57.993 "params": { 00:18:57.993 "bdev_io_pool_size": 65535, 00:18:57.993 "bdev_io_cache_size": 256, 00:18:57.993 "bdev_auto_examine": true, 00:18:57.993 "iobuf_small_cache_size": 128, 00:18:57.993 "iobuf_large_cache_size": 16 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "bdev_raid_set_options", 00:18:57.993 "params": { 00:18:57.993 "process_window_size_kb": 1024, 00:18:57.993 "process_max_bandwidth_mb_sec": 0 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "bdev_iscsi_set_options", 00:18:57.993 "params": { 00:18:57.993 "timeout_sec": 30 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "bdev_nvme_set_options", 00:18:57.993 "params": { 00:18:57.993 "action_on_timeout": "none", 00:18:57.993 "timeout_us": 0, 00:18:57.993 "timeout_admin_us": 0, 00:18:57.993 "keep_alive_timeout_ms": 10000, 00:18:57.993 "arbitration_burst": 0, 00:18:57.993 "low_priority_weight": 0, 00:18:57.993 "medium_priority_weight": 0, 00:18:57.993 "high_priority_weight": 0, 00:18:57.993 "nvme_adminq_poll_period_us": 10000, 00:18:57.993 "nvme_ioq_poll_period_us": 0, 00:18:57.993 "io_queue_requests": 0, 00:18:57.993 "delay_cmd_submit": true, 00:18:57.993 "transport_retry_count": 4, 00:18:57.993 "bdev_retry_count": 3, 00:18:57.993 "transport_ack_timeout": 0, 00:18:57.993 "ctrlr_loss_timeout_sec": 0, 00:18:57.993 "reconnect_delay_sec": 0, 
00:18:57.993 "fast_io_fail_timeout_sec": 0, 00:18:57.993 "disable_auto_failback": false, 00:18:57.993 "generate_uuids": false, 00:18:57.993 "transport_tos": 0, 00:18:57.993 "nvme_error_stat": false, 00:18:57.993 "rdma_srq_size": 0, 00:18:57.993 "io_path_stat": false, 00:18:57.993 "allow_accel_sequence": false, 00:18:57.993 "rdma_max_cq_size": 0, 00:18:57.993 "rdma_cm_event_timeout_ms": 0, 00:18:57.993 "dhchap_digests": [ 00:18:57.993 "sha256", 00:18:57.993 "sha384", 00:18:57.993 "sha512" 00:18:57.993 ], 00:18:57.993 "dhchap_dhgroups": [ 00:18:57.993 "null", 00:18:57.993 "ffdhe2048", 00:18:57.993 "ffdhe3072", 00:18:57.993 "ffdhe4096", 00:18:57.993 "ffdhe6144", 00:18:57.993 "ffdhe8192" 00:18:57.993 ] 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "bdev_nvme_set_hotplug", 00:18:57.993 "params": { 00:18:57.993 "period_us": 100000, 00:18:57.993 "enable": false 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "bdev_malloc_create", 00:18:57.993 "params": { 00:18:57.993 "name": "malloc0", 00:18:57.993 "num_blocks": 8192, 00:18:57.993 "block_size": 4096, 00:18:57.993 "physical_block_size": 4096, 00:18:57.993 "uuid": "d0015d1d-b7a3-474e-829c-d35b39058f04", 00:18:57.993 "optimal_io_boundary": 0, 00:18:57.993 "md_size": 0, 00:18:57.993 "dif_type": 0, 00:18:57.993 "dif_is_head_of_md": false, 00:18:57.993 "dif_pi_format": 0 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "bdev_wait_for_examine" 00:18:57.993 } 00:18:57.993 ] 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "subsystem": "nbd", 00:18:57.993 "config": [] 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "subsystem": "scheduler", 00:18:57.993 "config": [ 00:18:57.993 { 00:18:57.993 "method": "framework_set_scheduler", 00:18:57.993 "params": { 00:18:57.993 "name": "static" 00:18:57.993 } 00:18:57.993 } 00:18:57.993 ] 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "subsystem": "nvmf", 00:18:57.993 "config": [ 00:18:57.993 { 00:18:57.993 "method": "nvmf_set_config", 
00:18:57.993 "params": { 00:18:57.993 "discovery_filter": "match_any", 00:18:57.993 "admin_cmd_passthru": { 00:18:57.993 "identify_ctrlr": false 00:18:57.993 }, 00:18:57.993 "dhchap_digests": [ 00:18:57.993 "sha256", 00:18:57.993 "sha384", 00:18:57.993 "sha512" 00:18:57.993 ], 00:18:57.993 "dhchap_dhgroups": [ 00:18:57.993 "null", 00:18:57.993 "ffdhe2048", 00:18:57.993 "ffdhe3072", 00:18:57.993 "ffdhe4096", 00:18:57.993 "ffdhe6144", 00:18:57.993 "ffdhe8192" 00:18:57.993 ] 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "nvmf_set_max_subsystems", 00:18:57.993 "params": { 00:18:57.993 "max_subsystems": 1024 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "nvmf_set_crdt", 00:18:57.993 "params": { 00:18:57.993 "crdt1": 0, 00:18:57.993 "crdt2": 0, 00:18:57.993 "crdt3": 0 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "nvmf_create_transport", 00:18:57.993 "params": { 00:18:57.993 "trtype": "TCP", 00:18:57.993 "max_queue_depth": 128, 00:18:57.993 "max_io_qpairs_per_ctrlr": 127, 00:18:57.993 "in_capsule_data_size": 4096, 00:18:57.993 "max_io_size": 131072, 00:18:57.993 "io_unit_size": 131072, 00:18:57.993 "max_aq_depth": 128, 00:18:57.993 "num_shared_buffers": 511, 00:18:57.993 "buf_cache_size": 4294967295, 00:18:57.993 "dif_insert_or_strip": false, 00:18:57.993 "zcopy": false, 00:18:57.993 "c2h_success": false, 00:18:57.993 "sock_priority": 0, 00:18:57.993 "abort_timeout_sec": 1, 00:18:57.993 "ack_timeout": 0, 00:18:57.993 "data_wr_pool_size": 0 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "nvmf_create_subsystem", 00:18:57.993 "params": { 00:18:57.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.993 "allow_any_host": false, 00:18:57.993 "serial_number": "00000000000000000000", 00:18:57.993 "model_number": "SPDK bdev Controller", 00:18:57.993 "max_namespaces": 32, 00:18:57.993 "min_cntlid": 1, 00:18:57.993 "max_cntlid": 65519, 00:18:57.993 "ana_reporting": false 00:18:57.993 } 
00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "nvmf_subsystem_add_host", 00:18:57.993 "params": { 00:18:57.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.993 "host": "nqn.2016-06.io.spdk:host1", 00:18:57.993 "psk": "key0" 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "nvmf_subsystem_add_ns", 00:18:57.993 "params": { 00:18:57.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.993 "namespace": { 00:18:57.993 "nsid": 1, 00:18:57.993 "bdev_name": "malloc0", 00:18:57.993 "nguid": "D0015D1DB7A3474E829CD35B39058F04", 00:18:57.993 "uuid": "d0015d1d-b7a3-474e-829c-d35b39058f04", 00:18:57.993 "no_auto_visible": false 00:18:57.993 } 00:18:57.993 } 00:18:57.993 }, 00:18:57.993 { 00:18:57.993 "method": "nvmf_subsystem_add_listener", 00:18:57.993 "params": { 00:18:57.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.993 "listen_address": { 00:18:57.993 "trtype": "TCP", 00:18:57.993 "adrfam": "IPv4", 00:18:57.993 "traddr": "10.0.0.2", 00:18:57.993 "trsvcid": "4420" 00:18:57.993 }, 00:18:57.993 "secure_channel": false, 00:18:57.993 "sock_impl": "ssl" 00:18:57.993 } 00:18:57.993 } 00:18:57.993 ] 00:18:57.993 } 00:18:57.993 ] 00:18:57.993 }' 00:18:57.994 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:58.253 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:58.253 "subsystems": [ 00:18:58.253 { 00:18:58.253 "subsystem": "keyring", 00:18:58.253 "config": [ 00:18:58.253 { 00:18:58.253 "method": "keyring_file_add_key", 00:18:58.253 "params": { 00:18:58.253 "name": "key0", 00:18:58.253 "path": "/tmp/tmp.yPNDaNbpPe" 00:18:58.253 } 00:18:58.253 } 00:18:58.253 ] 00:18:58.253 }, 00:18:58.253 { 00:18:58.253 "subsystem": "iobuf", 00:18:58.253 "config": [ 00:18:58.253 { 00:18:58.253 "method": "iobuf_set_options", 00:18:58.253 "params": { 00:18:58.253 "small_pool_count": 8192, 00:18:58.253 
"large_pool_count": 1024, 00:18:58.253 "small_bufsize": 8192, 00:18:58.253 "large_bufsize": 135168 00:18:58.253 } 00:18:58.253 } 00:18:58.253 ] 00:18:58.253 }, 00:18:58.253 { 00:18:58.253 "subsystem": "sock", 00:18:58.253 "config": [ 00:18:58.253 { 00:18:58.253 "method": "sock_set_default_impl", 00:18:58.253 "params": { 00:18:58.253 "impl_name": "posix" 00:18:58.253 } 00:18:58.253 }, 00:18:58.253 { 00:18:58.253 "method": "sock_impl_set_options", 00:18:58.253 "params": { 00:18:58.253 "impl_name": "ssl", 00:18:58.253 "recv_buf_size": 4096, 00:18:58.253 "send_buf_size": 4096, 00:18:58.253 "enable_recv_pipe": true, 00:18:58.253 "enable_quickack": false, 00:18:58.253 "enable_placement_id": 0, 00:18:58.253 "enable_zerocopy_send_server": true, 00:18:58.253 "enable_zerocopy_send_client": false, 00:18:58.253 "zerocopy_threshold": 0, 00:18:58.253 "tls_version": 0, 00:18:58.253 "enable_ktls": false 00:18:58.253 } 00:18:58.253 }, 00:18:58.253 { 00:18:58.253 "method": "sock_impl_set_options", 00:18:58.253 "params": { 00:18:58.253 "impl_name": "posix", 00:18:58.253 "recv_buf_size": 2097152, 00:18:58.253 "send_buf_size": 2097152, 00:18:58.253 "enable_recv_pipe": true, 00:18:58.253 "enable_quickack": false, 00:18:58.253 "enable_placement_id": 0, 00:18:58.253 "enable_zerocopy_send_server": true, 00:18:58.253 "enable_zerocopy_send_client": false, 00:18:58.253 "zerocopy_threshold": 0, 00:18:58.253 "tls_version": 0, 00:18:58.253 "enable_ktls": false 00:18:58.253 } 00:18:58.253 } 00:18:58.253 ] 00:18:58.253 }, 00:18:58.253 { 00:18:58.253 "subsystem": "vmd", 00:18:58.253 "config": [] 00:18:58.253 }, 00:18:58.253 { 00:18:58.253 "subsystem": "accel", 00:18:58.253 "config": [ 00:18:58.253 { 00:18:58.253 "method": "accel_set_options", 00:18:58.253 "params": { 00:18:58.253 "small_cache_size": 128, 00:18:58.253 "large_cache_size": 16, 00:18:58.253 "task_count": 2048, 00:18:58.253 "sequence_count": 2048, 00:18:58.253 "buf_count": 2048 00:18:58.253 } 00:18:58.253 } 00:18:58.253 ] 00:18:58.253 
}, 00:18:58.253 { 00:18:58.253 "subsystem": "bdev", 00:18:58.253 "config": [ 00:18:58.253 { 00:18:58.253 "method": "bdev_set_options", 00:18:58.253 "params": { 00:18:58.253 "bdev_io_pool_size": 65535, 00:18:58.253 "bdev_io_cache_size": 256, 00:18:58.253 "bdev_auto_examine": true, 00:18:58.253 "iobuf_small_cache_size": 128, 00:18:58.253 "iobuf_large_cache_size": 16 00:18:58.253 } 00:18:58.253 }, 00:18:58.253 { 00:18:58.253 "method": "bdev_raid_set_options", 00:18:58.253 "params": { 00:18:58.253 "process_window_size_kb": 1024, 00:18:58.253 "process_max_bandwidth_mb_sec": 0 00:18:58.253 } 00:18:58.253 }, 00:18:58.253 { 00:18:58.253 "method": "bdev_iscsi_set_options", 00:18:58.253 "params": { 00:18:58.253 "timeout_sec": 30 00:18:58.253 } 00:18:58.253 }, 00:18:58.253 { 00:18:58.253 "method": "bdev_nvme_set_options", 00:18:58.253 "params": { 00:18:58.253 "action_on_timeout": "none", 00:18:58.253 "timeout_us": 0, 00:18:58.253 "timeout_admin_us": 0, 00:18:58.253 "keep_alive_timeout_ms": 10000, 00:18:58.253 "arbitration_burst": 0, 00:18:58.253 "low_priority_weight": 0, 00:18:58.253 "medium_priority_weight": 0, 00:18:58.253 "high_priority_weight": 0, 00:18:58.253 "nvme_adminq_poll_period_us": 10000, 00:18:58.253 "nvme_ioq_poll_period_us": 0, 00:18:58.253 "io_queue_requests": 512, 00:18:58.253 "delay_cmd_submit": true, 00:18:58.253 "transport_retry_count": 4, 00:18:58.253 "bdev_retry_count": 3, 00:18:58.253 "transport_ack_timeout": 0, 00:18:58.253 "ctrlr_loss_timeout_sec": 0, 00:18:58.253 "reconnect_delay_sec": 0, 00:18:58.253 "fast_io_fail_timeout_sec": 0, 00:18:58.253 "disable_auto_failback": false, 00:18:58.253 "generate_uuids": false, 00:18:58.253 "transport_tos": 0, 00:18:58.253 "nvme_error_stat": false, 00:18:58.253 "rdma_srq_size": 0, 00:18:58.253 "io_path_stat": false, 00:18:58.253 "allow_accel_sequence": false, 00:18:58.254 "rdma_max_cq_size": 0, 00:18:58.254 "rdma_cm_event_timeout_ms": 0, 00:18:58.254 "dhchap_digests": [ 00:18:58.254 "sha256", 00:18:58.254 "sha384", 
00:18:58.254 "sha512" 00:18:58.254 ], 00:18:58.254 "dhchap_dhgroups": [ 00:18:58.254 "null", 00:18:58.254 "ffdhe2048", 00:18:58.254 "ffdhe3072", 00:18:58.254 "ffdhe4096", 00:18:58.254 "ffdhe6144", 00:18:58.254 "ffdhe8192" 00:18:58.254 ] 00:18:58.254 } 00:18:58.254 }, 00:18:58.254 { 00:18:58.254 "method": "bdev_nvme_attach_controller", 00:18:58.254 "params": { 00:18:58.254 "name": "nvme0", 00:18:58.254 "trtype": "TCP", 00:18:58.254 "adrfam": "IPv4", 00:18:58.254 "traddr": "10.0.0.2", 00:18:58.254 "trsvcid": "4420", 00:18:58.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.254 "prchk_reftag": false, 00:18:58.254 "prchk_guard": false, 00:18:58.254 "ctrlr_loss_timeout_sec": 0, 00:18:58.254 "reconnect_delay_sec": 0, 00:18:58.254 "fast_io_fail_timeout_sec": 0, 00:18:58.254 "psk": "key0", 00:18:58.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.254 "hdgst": false, 00:18:58.254 "ddgst": false, 00:18:58.254 "multipath": "multipath" 00:18:58.254 } 00:18:58.254 }, 00:18:58.254 { 00:18:58.254 "method": "bdev_nvme_set_hotplug", 00:18:58.254 "params": { 00:18:58.254 "period_us": 100000, 00:18:58.254 "enable": false 00:18:58.254 } 00:18:58.254 }, 00:18:58.254 { 00:18:58.254 "method": "bdev_enable_histogram", 00:18:58.254 "params": { 00:18:58.254 "name": "nvme0n1", 00:18:58.254 "enable": true 00:18:58.254 } 00:18:58.254 }, 00:18:58.254 { 00:18:58.254 "method": "bdev_wait_for_examine" 00:18:58.254 } 00:18:58.254 ] 00:18:58.254 }, 00:18:58.254 { 00:18:58.254 "subsystem": "nbd", 00:18:58.254 "config": [] 00:18:58.254 } 00:18:58.254 ] 00:18:58.254 }' 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1240832 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1240832 ']' 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1240832 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1240832 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1240832' 00:18:58.254 killing process with pid 1240832 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1240832 00:18:58.254 Received shutdown signal, test time was about 1.000000 seconds 00:18:58.254 00:18:58.254 Latency(us) 00:18:58.254 [2024-10-15T10:59:18.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.254 [2024-10-15T10:59:18.573Z] =================================================================================================================== 00:18:58.254 [2024-10-15T10:59:18.573Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1240832 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1240810 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1240810 ']' 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1240810 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:58.254 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 1240810 00:18:58.513 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:58.514 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:58.514 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1240810' 00:18:58.514 killing process with pid 1240810 00:18:58.514 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1240810 00:18:58.514 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1240810 00:18:58.514 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:58.514 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:58.514 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:58.514 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:58.514 "subsystems": [ 00:18:58.514 { 00:18:58.514 "subsystem": "keyring", 00:18:58.514 "config": [ 00:18:58.514 { 00:18:58.514 "method": "keyring_file_add_key", 00:18:58.514 "params": { 00:18:58.514 "name": "key0", 00:18:58.514 "path": "/tmp/tmp.yPNDaNbpPe" 00:18:58.514 } 00:18:58.514 } 00:18:58.514 ] 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "subsystem": "iobuf", 00:18:58.514 "config": [ 00:18:58.514 { 00:18:58.514 "method": "iobuf_set_options", 00:18:58.514 "params": { 00:18:58.514 "small_pool_count": 8192, 00:18:58.514 "large_pool_count": 1024, 00:18:58.514 "small_bufsize": 8192, 00:18:58.514 "large_bufsize": 135168 00:18:58.514 } 00:18:58.514 } 00:18:58.514 ] 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "subsystem": "sock", 00:18:58.514 "config": [ 00:18:58.514 { 00:18:58.514 "method": "sock_set_default_impl", 00:18:58.514 "params": { 00:18:58.514 "impl_name": "posix" 
00:18:58.514 } 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "method": "sock_impl_set_options", 00:18:58.514 "params": { 00:18:58.514 "impl_name": "ssl", 00:18:58.514 "recv_buf_size": 4096, 00:18:58.514 "send_buf_size": 4096, 00:18:58.514 "enable_recv_pipe": true, 00:18:58.514 "enable_quickack": false, 00:18:58.514 "enable_placement_id": 0, 00:18:58.514 "enable_zerocopy_send_server": true, 00:18:58.514 "enable_zerocopy_send_client": false, 00:18:58.514 "zerocopy_threshold": 0, 00:18:58.514 "tls_version": 0, 00:18:58.514 "enable_ktls": false 00:18:58.514 } 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "method": "sock_impl_set_options", 00:18:58.514 "params": { 00:18:58.514 "impl_name": "posix", 00:18:58.514 "recv_buf_size": 2097152, 00:18:58.514 "send_buf_size": 2097152, 00:18:58.514 "enable_recv_pipe": true, 00:18:58.514 "enable_quickack": false, 00:18:58.514 "enable_placement_id": 0, 00:18:58.514 "enable_zerocopy_send_server": true, 00:18:58.514 "enable_zerocopy_send_client": false, 00:18:58.514 "zerocopy_threshold": 0, 00:18:58.514 "tls_version": 0, 00:18:58.514 "enable_ktls": false 00:18:58.514 } 00:18:58.514 } 00:18:58.514 ] 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "subsystem": "vmd", 00:18:58.514 "config": [] 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "subsystem": "accel", 00:18:58.514 "config": [ 00:18:58.514 { 00:18:58.514 "method": "accel_set_options", 00:18:58.514 "params": { 00:18:58.514 "small_cache_size": 128, 00:18:58.514 "large_cache_size": 16, 00:18:58.514 "task_count": 2048, 00:18:58.514 "sequence_count": 2048, 00:18:58.514 "buf_count": 2048 00:18:58.514 } 00:18:58.514 } 00:18:58.514 ] 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "subsystem": "bdev", 00:18:58.514 "config": [ 00:18:58.514 { 00:18:58.514 "method": "bdev_set_options", 00:18:58.514 "params": { 00:18:58.514 "bdev_io_pool_size": 65535, 00:18:58.514 "bdev_io_cache_size": 256, 00:18:58.514 "bdev_auto_examine": true, 00:18:58.514 "iobuf_small_cache_size": 128, 00:18:58.514 
"iobuf_large_cache_size": 16 00:18:58.514 } 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "method": "bdev_raid_set_options", 00:18:58.514 "params": { 00:18:58.514 "process_window_size_kb": 1024, 00:18:58.514 "process_max_bandwidth_mb_sec": 0 00:18:58.514 } 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "method": "bdev_iscsi_set_options", 00:18:58.514 "params": { 00:18:58.514 "timeout_sec": 30 00:18:58.514 } 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "method": "bdev_nvme_set_options", 00:18:58.514 "params": { 00:18:58.514 "action_on_timeout": "none", 00:18:58.514 "timeout_us": 0, 00:18:58.514 "timeout_admin_us": 0, 00:18:58.514 "keep_alive_timeout_ms": 10000, 00:18:58.514 "arbitration_burst": 0, 00:18:58.514 "low_priority_weight": 0, 00:18:58.514 "medium_priority_weight": 0, 00:18:58.514 "high_priority_weight": 0, 00:18:58.514 "nvme_adminq_poll_period_us": 10000, 00:18:58.514 "nvme_ioq_poll_period_us": 0, 00:18:58.514 "io_queue_requests": 0, 00:18:58.514 "delay_cmd_submit": true, 00:18:58.514 "transport_retry_count": 4, 00:18:58.514 "bdev_retry_count": 3, 00:18:58.514 "transport_ack_timeout": 0, 00:18:58.514 "ctrlr_loss_timeout_sec": 0, 00:18:58.514 "reconnect_delay_sec": 0, 00:18:58.514 "fast_io_fail_timeout_sec": 0, 00:18:58.514 "disable_auto_failback": false, 00:18:58.514 "generate_uuids": false, 00:18:58.514 "transport_tos": 0, 00:18:58.514 "nvme_error_stat": false, 00:18:58.514 "rdma_srq_size": 0, 00:18:58.514 "io_path_stat": false, 00:18:58.514 "allow_accel_sequence": false, 00:18:58.514 "rdma_max_cq_size": 0, 00:18:58.514 "rdma_cm_event_timeout_ms": 0, 00:18:58.514 "dhchap_digests": [ 00:18:58.514 "sha256", 00:18:58.514 "sha384", 00:18:58.514 "sha512" 00:18:58.514 ], 00:18:58.514 "dhchap_dhgroups": [ 00:18:58.514 "null", 00:18:58.514 "ffdhe2048", 00:18:58.514 "ffdhe3072", 00:18:58.514 "ffdhe4096", 00:18:58.514 "ffdhe6144", 00:18:58.514 "ffdhe8192" 00:18:58.514 ] 00:18:58.514 } 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "method": "bdev_nvme_set_hotplug", 
00:18:58.514 "params": { 00:18:58.514 "period_us": 100000, 00:18:58.514 "enable": false 00:18:58.514 } 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "method": "bdev_malloc_create", 00:18:58.514 "params": { 00:18:58.514 "name": "malloc0", 00:18:58.514 "num_blocks": 8192, 00:18:58.514 "block_size": 4096, 00:18:58.514 "physical_block_size": 4096, 00:18:58.514 "uuid": "d0015d1d-b7a3-474e-829c-d35b39058f04", 00:18:58.514 "optimal_io_boundary": 0, 00:18:58.514 "md_size": 0, 00:18:58.514 "dif_type": 0, 00:18:58.514 "dif_is_head_of_md": false, 00:18:58.514 "dif_pi_format": 0 00:18:58.514 } 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "method": "bdev_wait_for_examine" 00:18:58.514 } 00:18:58.514 ] 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "subsystem": "nbd", 00:18:58.514 "config": [] 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "subsystem": "scheduler", 00:18:58.514 "config": [ 00:18:58.514 { 00:18:58.514 "method": "framework_set_scheduler", 00:18:58.514 "params": { 00:18:58.514 "name": "static" 00:18:58.514 } 00:18:58.514 } 00:18:58.514 ] 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "subsystem": "nvmf", 00:18:58.514 "config": [ 00:18:58.514 { 00:18:58.514 "method": "nvmf_set_config", 00:18:58.514 "params": { 00:18:58.514 "discovery_filter": "match_any", 00:18:58.514 "admin_cmd_passthru": { 00:18:58.514 "identify_ctrlr": false 00:18:58.514 }, 00:18:58.514 "dhchap_digests": [ 00:18:58.514 "sha256", 00:18:58.514 "sha384", 00:18:58.514 "sha512" 00:18:58.514 ], 00:18:58.514 "dhchap_dhgroups": [ 00:18:58.514 "null", 00:18:58.514 "ffdhe2048", 00:18:58.514 "ffdhe3072", 00:18:58.514 "ffdhe4096", 00:18:58.514 "ffdhe6144", 00:18:58.514 "ffdhe8192" 00:18:58.514 ] 00:18:58.514 } 00:18:58.514 }, 00:18:58.514 { 00:18:58.514 "method": "nvmf_set_max_subsystems", 00:18:58.514 "params": { 00:18:58.514 "max_subsystems": 1024 00:18:58.514 } 00:18:58.514 }, 00:18:58.514 { 00:18:58.515 "method": "nvmf_set_crdt", 00:18:58.515 "params": { 00:18:58.515 "crdt1": 0, 00:18:58.515 "crdt2": 0, 00:18:58.515 
"crdt3": 0 00:18:58.515 } 00:18:58.515 }, 00:18:58.515 { 00:18:58.515 "method": "nvmf_create_transport", 00:18:58.515 "params": { 00:18:58.515 "trtype": "TCP", 00:18:58.515 "max_queue_depth": 128, 00:18:58.515 "max_io_qpairs_per_ctrlr": 127, 00:18:58.515 "in_capsule_data_size": 4096, 00:18:58.515 "max_io_size": 131072, 00:18:58.515 "io_unit_size": 131072, 00:18:58.515 "max_aq_depth": 128, 00:18:58.515 "num_shared_buffers": 511, 00:18:58.515 "buf_cache_size": 4294967295, 00:18:58.515 "dif_insert_or_strip": false, 00:18:58.515 "zcopy": false, 00:18:58.515 "c2h_success": false, 00:18:58.515 "sock_priority": 0, 00:18:58.515 "abort_timeout_sec": 1, 00:18:58.515 "ack_timeout": 0, 00:18:58.515 "data_wr_pool_size": 0 00:18:58.515 } 00:18:58.515 }, 00:18:58.515 { 00:18:58.515 "method": "nvmf_create_subsystem", 00:18:58.515 "params": { 00:18:58.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.515 "allow_any_host": false, 00:18:58.515 "serial_number": "00000000000000000000", 00:18:58.515 "model_number": "SPDK bdev Controller", 00:18:58.515 "max_namespaces": 32, 00:18:58.515 "min_cntlid": 1, 00:18:58.515 "max_cntlid": 65519, 00:18:58.515 "ana_reporting": false 00:18:58.515 } 00:18:58.515 }, 00:18:58.515 { 00:18:58.515 "method": "nvmf_subsystem_add_host", 00:18:58.515 "params": { 00:18:58.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.515 "host": "nqn.2016-06.io.spdk:host1", 00:18:58.515 "psk": "key0" 00:18:58.515 } 00:18:58.515 }, 00:18:58.515 { 00:18:58.515 "method": "nvmf_subsystem_add_ns", 00:18:58.515 "params": { 00:18:58.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.515 "namespace": { 00:18:58.515 "nsid": 1, 00:18:58.515 "bdev_name": "malloc0", 00:18:58.515 "nguid": "D0015D1DB7A3474E829CD35B39058F04", 00:18:58.515 "uuid": "d0015d1d-b7a3-474e-829c-d35b39058f04", 00:18:58.515 "no_auto_visible": false 00:18:58.515 } 00:18:58.515 } 00:18:58.515 }, 00:18:58.515 { 00:18:58.515 "method": "nvmf_subsystem_add_listener", 00:18:58.515 "params": { 00:18:58.515 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:18:58.515 "listen_address": { 00:18:58.515 "trtype": "TCP", 00:18:58.515 "adrfam": "IPv4", 00:18:58.515 "traddr": "10.0.0.2", 00:18:58.515 "trsvcid": "4420" 00:18:58.515 }, 00:18:58.515 "secure_channel": false, 00:18:58.515 "sock_impl": "ssl" 00:18:58.515 } 00:18:58.515 } 00:18:58.515 ] 00:18:58.515 } 00:18:58.515 ] 00:18:58.515 }' 00:18:58.515 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.515 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1241319 00:18:58.515 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:58.515 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1241319 00:18:58.515 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1241319 ']' 00:18:58.515 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.515 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:58.515 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.515 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:58.515 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.515 [2024-10-15 12:59:18.819473] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:18:58.515 [2024-10-15 12:59:18.819520] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.774 [2024-10-15 12:59:18.891996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.774 [2024-10-15 12:59:18.931846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.774 [2024-10-15 12:59:18.931881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.774 [2024-10-15 12:59:18.931889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.774 [2024-10-15 12:59:18.931895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.774 [2024-10-15 12:59:18.931900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:58.774 [2024-10-15 12:59:18.932447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.033 [2024-10-15 12:59:19.143010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.033 [2024-10-15 12:59:19.175049] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:59.033 [2024-10-15 12:59:19.175262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.604 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:59.604 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:59.604 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:59.604 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:59.604 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.604 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.605 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1241470 00:18:59.605 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1241470 /var/tmp/bdevperf.sock 00:18:59.605 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1241470 ']' 00:18:59.605 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.605 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:59.605 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:18:59.605 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.605 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:59.605 "subsystems": [ 00:18:59.605 { 00:18:59.605 "subsystem": "keyring", 00:18:59.605 "config": [ 00:18:59.605 { 00:18:59.605 "method": "keyring_file_add_key", 00:18:59.605 "params": { 00:18:59.605 "name": "key0", 00:18:59.605 "path": "/tmp/tmp.yPNDaNbpPe" 00:18:59.605 } 00:18:59.605 } 00:18:59.605 ] 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "subsystem": "iobuf", 00:18:59.605 "config": [ 00:18:59.605 { 00:18:59.605 "method": "iobuf_set_options", 00:18:59.605 "params": { 00:18:59.605 "small_pool_count": 8192, 00:18:59.605 "large_pool_count": 1024, 00:18:59.605 "small_bufsize": 8192, 00:18:59.605 "large_bufsize": 135168 00:18:59.605 } 00:18:59.605 } 00:18:59.605 ] 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "subsystem": "sock", 00:18:59.605 "config": [ 00:18:59.605 { 00:18:59.605 "method": "sock_set_default_impl", 00:18:59.605 "params": { 00:18:59.605 "impl_name": "posix" 00:18:59.605 } 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "method": "sock_impl_set_options", 00:18:59.605 "params": { 00:18:59.605 "impl_name": "ssl", 00:18:59.605 "recv_buf_size": 4096, 00:18:59.605 "send_buf_size": 4096, 00:18:59.605 "enable_recv_pipe": true, 00:18:59.605 "enable_quickack": false, 00:18:59.605 "enable_placement_id": 0, 00:18:59.605 "enable_zerocopy_send_server": true, 00:18:59.605 "enable_zerocopy_send_client": false, 00:18:59.605 "zerocopy_threshold": 0, 00:18:59.605 "tls_version": 0, 00:18:59.605 "enable_ktls": false 00:18:59.605 } 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "method": "sock_impl_set_options", 00:18:59.605 "params": { 00:18:59.605 "impl_name": "posix", 
00:18:59.605 "recv_buf_size": 2097152, 00:18:59.605 "send_buf_size": 2097152, 00:18:59.605 "enable_recv_pipe": true, 00:18:59.605 "enable_quickack": false, 00:18:59.605 "enable_placement_id": 0, 00:18:59.605 "enable_zerocopy_send_server": true, 00:18:59.605 "enable_zerocopy_send_client": false, 00:18:59.605 "zerocopy_threshold": 0, 00:18:59.605 "tls_version": 0, 00:18:59.605 "enable_ktls": false 00:18:59.605 } 00:18:59.605 } 00:18:59.605 ] 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "subsystem": "vmd", 00:18:59.605 "config": [] 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "subsystem": "accel", 00:18:59.605 "config": [ 00:18:59.605 { 00:18:59.605 "method": "accel_set_options", 00:18:59.605 "params": { 00:18:59.605 "small_cache_size": 128, 00:18:59.605 "large_cache_size": 16, 00:18:59.605 "task_count": 2048, 00:18:59.605 "sequence_count": 2048, 00:18:59.605 "buf_count": 2048 00:18:59.605 } 00:18:59.605 } 00:18:59.605 ] 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "subsystem": "bdev", 00:18:59.605 "config": [ 00:18:59.605 { 00:18:59.605 "method": "bdev_set_options", 00:18:59.605 "params": { 00:18:59.605 "bdev_io_pool_size": 65535, 00:18:59.605 "bdev_io_cache_size": 256, 00:18:59.605 "bdev_auto_examine": true, 00:18:59.605 "iobuf_small_cache_size": 128, 00:18:59.605 "iobuf_large_cache_size": 16 00:18:59.605 } 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "method": "bdev_raid_set_options", 00:18:59.605 "params": { 00:18:59.605 "process_window_size_kb": 1024, 00:18:59.605 "process_max_bandwidth_mb_sec": 0 00:18:59.605 } 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "method": "bdev_iscsi_set_options", 00:18:59.605 "params": { 00:18:59.605 "timeout_sec": 30 00:18:59.605 } 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "method": "bdev_nvme_set_options", 00:18:59.605 "params": { 00:18:59.605 "action_on_timeout": "none", 00:18:59.605 "timeout_us": 0, 00:18:59.605 "timeout_admin_us": 0, 00:18:59.605 "keep_alive_timeout_ms": 10000, 00:18:59.605 "arbitration_burst": 0, 00:18:59.605 
"low_priority_weight": 0, 00:18:59.605 "medium_priority_weight": 0, 00:18:59.605 "high_priority_weight": 0, 00:18:59.605 "nvme_adminq_poll_period_us": 10000, 00:18:59.605 "nvme_ioq_poll_period_us": 0, 00:18:59.605 "io_queue_requests": 512, 00:18:59.605 "delay_cmd_submit": true, 00:18:59.605 "transport_retry_count": 4, 00:18:59.605 "bdev_retry_count": 3, 00:18:59.605 "transport_ack_timeout": 0, 00:18:59.605 "ctrlr_loss_timeout_sec": 0, 00:18:59.605 "reconnect_delay_sec": 0, 00:18:59.605 "fast_io_fail_timeout_sec": 0, 00:18:59.605 "disable_auto_failback": false, 00:18:59.605 "generate_uuids": false, 00:18:59.605 "transport_tos": 0, 00:18:59.605 "nvme_error_stat": false, 00:18:59.605 "rdma_srq_size": 0, 00:18:59.605 "io_path_stat": false, 00:18:59.605 "allow_accel_sequence": false, 00:18:59.605 "rdma_max_cq_size": 0, 00:18:59.605 "rdma_cm_event_timeout_ms": 0, 00:18:59.605 "dhchap_digests": [ 00:18:59.605 "sha256", 00:18:59.605 "sha384", 00:18:59.605 "sha512" 00:18:59.605 ], 00:18:59.605 "dhchap_dhgroups": [ 00:18:59.605 "null", 00:18:59.605 "ffdhe2048", 00:18:59.605 "ffdhe3072", 00:18:59.605 "ffdhe4096", 00:18:59.605 "ffdhe6144", 00:18:59.605 "ffdhe8192" 00:18:59.605 ] 00:18:59.605 } 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "method": "bdev_nvme_attach_controller", 00:18:59.605 "params": { 00:18:59.605 "name": "nvme0", 00:18:59.605 "trtype": "TCP", 00:18:59.605 "adrfam": "IPv4", 00:18:59.605 "traddr": "10.0.0.2", 00:18:59.605 "trsvcid": "4420", 00:18:59.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.605 "prchk_reftag": false, 00:18:59.605 "prchk_guard": false, 00:18:59.605 "ctrlr_loss_timeout_sec": 0, 00:18:59.605 "reconnect_delay_sec": 0, 00:18:59.605 "fast_io_fail_timeout_sec": 0, 00:18:59.605 "psk": "key0", 00:18:59.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.605 "hdgst": false, 00:18:59.605 "ddgst": false, 00:18:59.605 "multipath": "multipath" 00:18:59.605 } 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "method": "bdev_nvme_set_hotplug", 
00:18:59.605 "params": { 00:18:59.605 "period_us": 100000, 00:18:59.605 "enable": false 00:18:59.605 } 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "method": "bdev_enable_histogram", 00:18:59.605 "params": { 00:18:59.605 "name": "nvme0n1", 00:18:59.605 "enable": true 00:18:59.605 } 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "method": "bdev_wait_for_examine" 00:18:59.605 } 00:18:59.605 ] 00:18:59.605 }, 00:18:59.605 { 00:18:59.605 "subsystem": "nbd", 00:18:59.605 "config": [] 00:18:59.605 } 00:18:59.605 ] 00:18:59.605 }' 00:18:59.605 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.605 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.605 [2024-10-15 12:59:19.731212] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:18:59.605 [2024-10-15 12:59:19.731257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241470 ] 00:18:59.605 [2024-10-15 12:59:19.800340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.605 [2024-10-15 12:59:19.842231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.865 [2024-10-15 12:59:19.994611] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.433 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.433 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:00.433 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:00.433 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@279 -- # jq -r '.[].name' 00:19:00.692 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.692 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:00.692 Running I/O for 1 seconds... 00:19:01.630 5152.00 IOPS, 20.12 MiB/s 00:19:01.630 Latency(us) 00:19:01.630 [2024-10-15T10:59:21.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.630 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:01.630 Verification LBA range: start 0x0 length 0x2000 00:19:01.630 nvme0n1 : 1.01 5207.16 20.34 0.00 0.00 24414.03 5773.41 64911.85 00:19:01.630 [2024-10-15T10:59:21.949Z] =================================================================================================================== 00:19:01.630 [2024-10-15T10:59:21.949Z] Total : 5207.16 20.34 0.00 0.00 24414.03 5773.41 64911.85 00:19:01.630 { 00:19:01.630 "results": [ 00:19:01.630 { 00:19:01.630 "job": "nvme0n1", 00:19:01.630 "core_mask": "0x2", 00:19:01.630 "workload": "verify", 00:19:01.630 "status": "finished", 00:19:01.630 "verify_range": { 00:19:01.630 "start": 0, 00:19:01.630 "length": 8192 00:19:01.630 }, 00:19:01.630 "queue_depth": 128, 00:19:01.630 "io_size": 4096, 00:19:01.630 "runtime": 1.014181, 00:19:01.630 "iops": 5207.157302296138, 00:19:01.630 "mibps": 20.34045821209429, 00:19:01.630 "io_failed": 0, 00:19:01.630 "io_timeout": 0, 00:19:01.630 "avg_latency_us": 24414.03180710724, 00:19:01.630 "min_latency_us": 5773.409523809524, 00:19:01.630 "max_latency_us": 64911.84761904762 00:19:01.630 } 00:19:01.630 ], 00:19:01.630 "core_count": 1 00:19:01.630 } 00:19:01.630 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:01.630 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- 
# cleanup 00:19:01.630 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:01.630 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:19:01.630 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:19:01.630 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:01.630 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:01.630 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:01.630 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:01.630 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:01.630 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:01.630 nvmf_trace.0 00:19:01.889 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:19:01.889 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1241470 00:19:01.889 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1241470 ']' 00:19:01.889 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1241470 00:19:01.889 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:01.889 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:01.889 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1241470 00:19:01.889 12:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:01.889 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:01.889 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1241470' 00:19:01.889 killing process with pid 1241470 00:19:01.889 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1241470 00:19:01.889 Received shutdown signal, test time was about 1.000000 seconds 00:19:01.889 00:19:01.889 Latency(us) 00:19:01.889 [2024-10-15T10:59:22.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.889 [2024-10-15T10:59:22.208Z] =================================================================================================================== 00:19:01.889 [2024-10-15T10:59:22.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.889 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1241470 00:19:01.889 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:01.889 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:01.889 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:01.889 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:01.889 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:01.889 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:01.889 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:01.889 rmmod nvme_tcp 00:19:01.889 rmmod nvme_fabrics 00:19:02.148 rmmod nvme_keyring 00:19:02.148 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:02.148 12:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:02.148 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:02.148 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1241319 ']' 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1241319 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1241319 ']' 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1241319 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1241319 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1241319' 00:19:02.149 killing process with pid 1241319 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1241319 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1241319 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:02.149 
12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.149 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.685 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3jyfDdop48 /tmp/tmp.eivpGCQbYX /tmp/tmp.yPNDaNbpPe 00:19:04.686 00:19:04.686 real 1m19.225s 00:19:04.686 user 2m1.886s 00:19:04.686 sys 0m29.843s 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.686 ************************************ 00:19:04.686 END TEST nvmf_tls 00:19:04.686 ************************************ 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:04.686 ************************************ 00:19:04.686 START TEST nvmf_fips 00:19:04.686 ************************************ 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:04.686 * Looking for test storage... 00:19:04.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:04.686 12:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:04.686 12:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:04.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.686 --rc genhtml_branch_coverage=1 00:19:04.686 --rc genhtml_function_coverage=1 00:19:04.686 --rc genhtml_legend=1 00:19:04.686 --rc geninfo_all_blocks=1 00:19:04.686 --rc geninfo_unexecuted_blocks=1 00:19:04.686 00:19:04.686 ' 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:04.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.686 --rc genhtml_branch_coverage=1 00:19:04.686 --rc genhtml_function_coverage=1 00:19:04.686 --rc genhtml_legend=1 00:19:04.686 --rc geninfo_all_blocks=1 00:19:04.686 --rc geninfo_unexecuted_blocks=1 00:19:04.686 00:19:04.686 ' 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:04.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.686 --rc genhtml_branch_coverage=1 00:19:04.686 --rc genhtml_function_coverage=1 00:19:04.686 --rc genhtml_legend=1 00:19:04.686 --rc geninfo_all_blocks=1 00:19:04.686 --rc geninfo_unexecuted_blocks=1 00:19:04.686 00:19:04.686 ' 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:04.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.686 --rc genhtml_branch_coverage=1 00:19:04.686 --rc genhtml_function_coverage=1 00:19:04.686 --rc genhtml_legend=1 00:19:04.686 --rc geninfo_all_blocks=1 00:19:04.686 --rc geninfo_unexecuted_blocks=1 00:19:04.686 00:19:04.686 ' 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.686 12:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.686 12:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.686 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:04.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:04.687 Error setting digest 00:19:04.687 40322168E47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:04.687 40322168E47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:04.687 12:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:04.687 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:11.259 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:11.259 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:11.259 Found net devices under 0000:86:00.0: cvl_0_0 00:19:11.259 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:11.260 Found net devices under 0000:86:00.1: cvl_0_1 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.260 12:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:11.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:19:11.260 00:19:11.260 --- 10.0.0.2 ping statistics --- 00:19:11.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.260 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:19:11.260 00:19:11.260 --- 10.0.0.1 ping statistics --- 00:19:11.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.260 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:11.260 12:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1245411 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1245411 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1245411 ']' 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.260 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:11.260 [2024-10-15 12:59:31.050269] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:19:11.260 [2024-10-15 12:59:31.050319] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.260 [2024-10-15 12:59:31.120331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.260 [2024-10-15 12:59:31.161113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.260 [2024-10-15 12:59:31.161144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.260 [2024-10-15 12:59:31.161151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.260 [2024-10-15 12:59:31.161158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.260 [2024-10-15 12:59:31.161163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:11.260 [2024-10-15 12:59:31.161622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.TDW 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.TDW 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.TDW 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.TDW 00:19:11.828 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.828 [2024-10-15 12:59:32.081770] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.828 [2024-10-15 12:59:32.097775] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:11.828 [2024-10-15 12:59:32.097964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.828 malloc0 00:19:12.087 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:12.087 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1245624 00:19:12.087 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:12.087 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1245624 /var/tmp/bdevperf.sock 00:19:12.087 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1245624 ']' 00:19:12.087 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.087 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:12.087 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.087 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:12.087 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.087 [2024-10-15 12:59:32.225388] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:19:12.087 [2024-10-15 12:59:32.225435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245624 ] 00:19:12.087 [2024-10-15 12:59:32.292135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.087 [2024-10-15 12:59:32.333300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.346 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.346 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:12.346 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.TDW 00:19:12.346 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.606 [2024-10-15 12:59:32.768147] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:12.606 TLSTESTn1 00:19:12.606 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:12.866 Running I/O for 10 seconds... 
00:19:14.736 5305.00 IOPS, 20.72 MiB/s [2024-10-15T10:59:35.992Z] 5462.00 IOPS, 21.34 MiB/s [2024-10-15T10:59:37.369Z] 5470.00 IOPS, 21.37 MiB/s [2024-10-15T10:59:38.305Z] 5520.25 IOPS, 21.56 MiB/s [2024-10-15T10:59:39.242Z] 5543.00 IOPS, 21.65 MiB/s [2024-10-15T10:59:40.179Z] 5559.00 IOPS, 21.71 MiB/s [2024-10-15T10:59:41.114Z] 5578.00 IOPS, 21.79 MiB/s [2024-10-15T10:59:42.050Z] 5583.75 IOPS, 21.81 MiB/s [2024-10-15T10:59:42.989Z] 5592.33 IOPS, 21.85 MiB/s [2024-10-15T10:59:42.989Z] 5595.80 IOPS, 21.86 MiB/s 00:19:22.670 Latency(us) 00:19:22.670 [2024-10-15T10:59:42.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.670 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:22.670 Verification LBA range: start 0x0 length 0x2000 00:19:22.670 TLSTESTn1 : 10.01 5601.56 21.88 0.00 0.00 22818.20 5055.63 56423.38 00:19:22.670 [2024-10-15T10:59:42.989Z] =================================================================================================================== 00:19:22.670 [2024-10-15T10:59:42.989Z] Total : 5601.56 21.88 0.00 0.00 22818.20 5055.63 56423.38 00:19:22.670 { 00:19:22.670 "results": [ 00:19:22.670 { 00:19:22.670 "job": "TLSTESTn1", 00:19:22.670 "core_mask": "0x4", 00:19:22.670 "workload": "verify", 00:19:22.670 "status": "finished", 00:19:22.670 "verify_range": { 00:19:22.670 "start": 0, 00:19:22.670 "length": 8192 00:19:22.670 }, 00:19:22.670 "queue_depth": 128, 00:19:22.670 "io_size": 4096, 00:19:22.670 "runtime": 10.012036, 00:19:22.670 "iops": 5601.557964833526, 00:19:22.670 "mibps": 21.88108580013096, 00:19:22.670 "io_failed": 0, 00:19:22.670 "io_timeout": 0, 00:19:22.670 "avg_latency_us": 22818.198290662735, 00:19:22.670 "min_latency_us": 5055.634285714285, 00:19:22.670 "max_latency_us": 56423.375238095236 00:19:22.670 } 00:19:22.670 ], 00:19:22.670 "core_count": 1 00:19:22.670 } 00:19:22.929 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:22.929 
12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:22.929 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:19:22.929 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:19:22.929 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:22.929 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:22.929 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:22.930 nvmf_trace.0 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1245624 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1245624 ']' 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1245624 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1245624 00:19:22.930 12:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1245624' 00:19:22.930 killing process with pid 1245624 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1245624 00:19:22.930 Received shutdown signal, test time was about 10.000000 seconds 00:19:22.930 00:19:22.930 Latency(us) 00:19:22.930 [2024-10-15T10:59:43.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.930 [2024-10-15T10:59:43.249Z] =================================================================================================================== 00:19:22.930 [2024-10-15T10:59:43.249Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:22.930 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1245624 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:23.190 rmmod nvme_tcp 00:19:23.190 rmmod nvme_fabrics 00:19:23.190 rmmod nvme_keyring 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1245411 ']' 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1245411 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1245411 ']' 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1245411 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1245411 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1245411' 00:19:23.190 killing process with pid 1245411 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1245411 00:19:23.190 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1245411 00:19:23.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:23.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:23.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:23.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:23.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:19:23.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:23.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:19:23.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:23.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:23.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.356 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:25.356 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.TDW 00:19:25.356 00:19:25.356 real 0m21.048s 00:19:25.356 user 0m22.008s 00:19:25.356 sys 0m9.575s 00:19:25.356 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.356 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:25.356 ************************************ 00:19:25.356 END TEST nvmf_fips 00:19:25.356 ************************************ 00:19:25.616 12:59:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:25.616 12:59:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:25.616 12:59:45 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:19:25.616 12:59:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:25.616 ************************************ 00:19:25.616 START TEST nvmf_control_msg_list 00:19:25.616 ************************************ 00:19:25.616 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:25.616 * Looking for test storage... 00:19:25.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.616 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:25.616 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.617 12:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:25.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.617 --rc genhtml_branch_coverage=1 00:19:25.617 --rc genhtml_function_coverage=1 00:19:25.617 --rc genhtml_legend=1 00:19:25.617 --rc geninfo_all_blocks=1 00:19:25.617 --rc geninfo_unexecuted_blocks=1 00:19:25.617 00:19:25.617 ' 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:25.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.617 --rc genhtml_branch_coverage=1 00:19:25.617 --rc genhtml_function_coverage=1 00:19:25.617 --rc genhtml_legend=1 00:19:25.617 --rc geninfo_all_blocks=1 00:19:25.617 --rc geninfo_unexecuted_blocks=1 00:19:25.617 00:19:25.617 ' 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:25.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.617 --rc genhtml_branch_coverage=1 00:19:25.617 --rc genhtml_function_coverage=1 00:19:25.617 --rc genhtml_legend=1 00:19:25.617 --rc geninfo_all_blocks=1 00:19:25.617 --rc geninfo_unexecuted_blocks=1 00:19:25.617 00:19:25.617 ' 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # 
LCOV='lcov 00:19:25.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.617 --rc genhtml_branch_coverage=1 00:19:25.617 --rc genhtml_function_coverage=1 00:19:25.617 --rc genhtml_legend=1 00:19:25.617 --rc geninfo_all_blocks=1 00:19:25.617 --rc geninfo_unexecuted_blocks=1 00:19:25.617 00:19:25.617 ' 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.617 12:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:25.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:25.617 12:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.617 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.877 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:25.877 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:25.877 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:25.877 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:32.451 12:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:32.451 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:32.451 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:32.451 12:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:32.451 Found net devices under 0000:86:00.0: cvl_0_0 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:32.451 12:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:32.451 Found net devices under 0000:86:00.1: cvl_0_1 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.451 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.452 12:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:32.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:19:32.452 00:19:32.452 --- 10.0.0.2 ping statistics --- 00:19:32.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.452 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:19:32.452 00:19:32.452 --- 10.0.0.1 ping statistics --- 00:19:32.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.452 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1250990 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1250990 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1250990 ']' 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.452 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.452 [2024-10-15 12:59:51.960674] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:19:32.452 [2024-10-15 12:59:51.960715] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.452 [2024-10-15 12:59:52.032839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.452 [2024-10-15 12:59:52.073470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.452 [2024-10-15 12:59:52.073505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.452 [2024-10-15 12:59:52.073512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.452 [2024-10-15 12:59:52.073518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.452 [2024-10-15 12:59:52.073523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:32.452 [2024-10-15 12:59:52.074109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.452 [2024-10-15 12:59:52.207471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.452 Malloc0 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.452 [2024-10-15 12:59:52.247744] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1251012 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1251013 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1251014 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1251012 00:19:32.452 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:32.452 [2024-10-15 12:59:52.316138] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:32.452 [2024-10-15 12:59:52.326185] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:32.452 [2024-10-15 12:59:52.326336] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:33.391 Initializing NVMe Controllers 00:19:33.391 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:33.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:33.391 Initialization complete. Launching workers. 00:19:33.391 ======================================================== 00:19:33.391 Latency(us) 00:19:33.391 Device Information : IOPS MiB/s Average min max 00:19:33.391 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3887.00 15.18 256.84 174.85 382.52 00:19:33.391 ======================================================== 00:19:33.391 Total : 3887.00 15.18 256.84 174.85 382.52 00:19:33.391 00:19:33.391 [2024-10-15 12:59:53.380014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222dff0 is same with the state(6) to be set 00:19:33.391 Initializing NVMe Controllers 00:19:33.391 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:33.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:33.391 Initialization complete. Launching workers. 
00:19:33.391 ======================================================== 00:19:33.391 Latency(us) 00:19:33.391 Device Information : IOPS MiB/s Average min max 00:19:33.391 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 66.00 0.26 15639.96 194.71 41000.54 00:19:33.391 ======================================================== 00:19:33.391 Total : 66.00 0.26 15639.96 194.71 41000.54 00:19:33.391 00:19:33.391 [2024-10-15 12:59:53.422819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc650 is same with the state(6) to be set 00:19:33.391 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1251013 00:19:33.391 Initializing NVMe Controllers 00:19:33.391 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:33.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:33.391 Initialization complete. Launching workers. 
00:19:33.391 ======================================================== 00:19:33.391 Latency(us) 00:19:33.391 Device Information : IOPS MiB/s Average min max 00:19:33.391 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40886.24 40529.98 41008.72 00:19:33.391 ======================================================== 00:19:33.391 Total : 25.00 0.10 40886.24 40529.98 41008.72 00:19:33.391 00:19:33.391 [2024-10-15 12:59:53.452905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22389a0 is same with the state(6) to be set 00:19:33.391 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1251014 00:19:33.391 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:33.391 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:33.391 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:33.391 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:33.391 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:33.391 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:33.391 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:33.391 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:33.391 rmmod nvme_tcp 00:19:33.391 rmmod nvme_fabrics 00:19:33.391 rmmod nvme_keyring 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:33.392 12:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 1250990 ']' 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1250990 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1250990 ']' 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1250990 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1250990 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1250990' 00:19:33.392 killing process with pid 1250990 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1250990 00:19:33.392 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1250990 00:19:33.650 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:33.650 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:33.650 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # 
nvmf_tcp_fini 00:19:33.650 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:33.650 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:19:33.650 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:33.650 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:19:33.650 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:33.650 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:33.650 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.650 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.650 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.558 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:35.558 00:19:35.558 real 0m10.096s 00:19:35.558 user 0m6.627s 00:19:35.558 sys 0m5.293s 00:19:35.558 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:35.558 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:35.558 ************************************ 00:19:35.558 END TEST nvmf_control_msg_list 00:19:35.558 ************************************ 00:19:35.558 12:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:35.558 12:59:55 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:35.558 12:59:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:35.558 12:59:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:35.822 ************************************ 00:19:35.822 START TEST nvmf_wait_for_buf 00:19:35.822 ************************************ 00:19:35.822 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:35.822 * Looking for test storage... 00:19:35.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.822 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:35.822 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:35.822 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 
00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:35.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.822 --rc genhtml_branch_coverage=1 00:19:35.822 --rc genhtml_function_coverage=1 00:19:35.822 --rc genhtml_legend=1 00:19:35.822 --rc geninfo_all_blocks=1 00:19:35.822 --rc geninfo_unexecuted_blocks=1 00:19:35.822 00:19:35.822 ' 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:35.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.822 --rc genhtml_branch_coverage=1 00:19:35.822 --rc genhtml_function_coverage=1 00:19:35.822 --rc genhtml_legend=1 00:19:35.822 --rc geninfo_all_blocks=1 00:19:35.822 --rc geninfo_unexecuted_blocks=1 00:19:35.822 00:19:35.822 ' 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:35.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.822 --rc genhtml_branch_coverage=1 00:19:35.822 --rc genhtml_function_coverage=1 00:19:35.822 --rc genhtml_legend=1 00:19:35.822 --rc geninfo_all_blocks=1 00:19:35.822 --rc geninfo_unexecuted_blocks=1 00:19:35.822 00:19:35.822 ' 00:19:35.822 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:35.822 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:35.822 --rc genhtml_branch_coverage=1 00:19:35.822 --rc genhtml_function_coverage=1 00:19:35.823 --rc genhtml_legend=1 00:19:35.823 --rc geninfo_all_blocks=1 00:19:35.823 --rc geninfo_unexecuted_blocks=1 00:19:35.823 00:19:35.823 ' 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:35.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:35.823 12:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:35.823 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:42.500 
13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:42.500 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.500 13:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:42.500 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:42.500 13:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:42.500 Found net devices under 0000:86:00.0: cvl_0_0 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:42.500 Found net devices under 0000:86:00.1: cvl_0_1 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.500 
13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:42.500 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.501 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:42.501 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:42.501 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:42.501 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:42.501 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:42.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:42.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms
00:19:42.501 
00:19:42.501 --- 10.0.0.2 ping statistics ---
00:19:42.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:42.501 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:42.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:42.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms
00:19:42.501 
00:19:42.501 --- 10.0.0.1 ping statistics ---
00:19:42.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:42.501 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1254903 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1254903 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1254903 ']' 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.501 [2024-10-15 13:00:02.129712] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:19:42.501 [2024-10-15 13:00:02.129761] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:42.501 [2024-10-15 13:00:02.205304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:42.501 [2024-10-15 13:00:02.244947] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:42.501 [2024-10-15 13:00:02.244983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:42.501 [2024-10-15 13:00:02.244992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:42.501 [2024-10-15 13:00:02.244997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:42.501 [2024-10-15 13:00:02.245002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:42.501 [2024-10-15 13:00:02.245548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.501 Malloc0 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.501 [2024-10-15 13:00:02.431671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.501 [2024-10-15 13:00:02.455848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.501 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:42.501 [2024-10-15 13:00:02.534688] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:19:43.879 Initializing NVMe Controllers
00:19:43.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:43.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:19:43.879 Initialization complete. Launching workers.
00:19:43.879 ========================================================
00:19:43.879 Latency(us)
00:19:43.879 Device Information : IOPS MiB/s Average min max
00:19:43.879 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 126.68 15.83 32683.91 7256.26 63870.54
00:19:43.879 ========================================================
00:19:43.879 Total : 126.68 15.83 32683.91 7256.26 63870.54
00:19:43.879 
00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2006
00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2006 -eq 0 ]]
00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:43.879 rmmod nvme_tcp 00:19:43.879 rmmod nvme_fabrics 00:19:43.879 rmmod nvme_keyring 00:19:43.879 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1254903 ']' 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1254903 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1254903 ']' 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1254903 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1254903 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1254903' 00:19:43.879 killing process with pid 1254903 00:19:43.879 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1254903 00:19:43.880 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1254903 00:19:44.138 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:44.138 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:44.138 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:44.138 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:44.138 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:19:44.138 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:44.138 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:19:44.139 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:44.139 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:44.139 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.139 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.139 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.046 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:46.046 00:19:46.046 real 0m10.399s 00:19:46.046 user 0m3.925s 00:19:46.046 sys 0m4.922s 00:19:46.046 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:46.046 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:46.046 ************************************ 00:19:46.046 END TEST nvmf_wait_for_buf 00:19:46.046 ************************************ 00:19:46.046 13:00:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:46.046 13:00:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:46.046 13:00:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:46.046 13:00:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:46.046 13:00:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:46.046 13:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 
-- # net_devs=() 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:52.690 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:52.691 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:52.691 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:52.691 Found net devices under 0000:86:00.0: cvl_0_0 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:52.691 Found net devices under 0000:86:00.1: cvl_0_1 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:52.691 ************************************ 00:19:52.691 START TEST nvmf_perf_adq 00:19:52.691 ************************************ 00:19:52.691 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:52.691 * Looking for test storage... 
00:19:52.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:52.691 13:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:52.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.691 --rc 
genhtml_branch_coverage=1 00:19:52.691 --rc genhtml_function_coverage=1 00:19:52.691 --rc genhtml_legend=1 00:19:52.691 --rc geninfo_all_blocks=1 00:19:52.691 --rc geninfo_unexecuted_blocks=1 00:19:52.691 00:19:52.691 ' 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:52.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.691 --rc genhtml_branch_coverage=1 00:19:52.691 --rc genhtml_function_coverage=1 00:19:52.691 --rc genhtml_legend=1 00:19:52.691 --rc geninfo_all_blocks=1 00:19:52.691 --rc geninfo_unexecuted_blocks=1 00:19:52.691 00:19:52.691 ' 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:52.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.691 --rc genhtml_branch_coverage=1 00:19:52.691 --rc genhtml_function_coverage=1 00:19:52.691 --rc genhtml_legend=1 00:19:52.691 --rc geninfo_all_blocks=1 00:19:52.691 --rc geninfo_unexecuted_blocks=1 00:19:52.691 00:19:52.691 ' 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:52.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.691 --rc genhtml_branch_coverage=1 00:19:52.691 --rc genhtml_function_coverage=1 00:19:52.691 --rc genhtml_legend=1 00:19:52.691 --rc geninfo_all_blocks=1 00:19:52.691 --rc geninfo_unexecuted_blocks=1 00:19:52.691 00:19:52.691 ' 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.691 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.692 13:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:52.692 13:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.692 13:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:52.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:52.692 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:57.967 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.967 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:57.968 13:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:57.968 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:57.968 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:57.968 Found net devices under 0000:86:00.0: cvl_0_0 00:19:57.968 13:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:57.968 Found net devices under 0000:86:00.1: cvl_0_1 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:57.968 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:58.906 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:00.811 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:06.087 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:06.087 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:06.087 13:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:06.087 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:06.087 Found net devices under 0000:86:00.0: cvl_0_0 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:06.087 Found net devices under 0000:86:00.1: cvl_0_1 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:06.087 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:06.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:06.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:20:06.087 00:20:06.087 --- 10.0.0.2 ping statistics --- 00:20:06.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.087 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:20:06.088 00:20:06.088 --- 10.0.0.1 ping statistics --- 00:20:06.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.088 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter 
start_nvmf_tgt 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1263622 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1263622 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1263622 ']' 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.088 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.088 [2024-10-15 13:00:26.382673] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:20:06.088 [2024-10-15 13:00:26.382724] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.346 [2024-10-15 13:00:26.457297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:06.346 [2024-10-15 13:00:26.501166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.346 [2024-10-15 13:00:26.501202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.346 [2024-10-15 13:00:26.501209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.346 [2024-10-15 13:00:26.501215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.346 [2024-10-15 13:00:26.501220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:06.346 [2024-10-15 13:00:26.502754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.346 [2024-10-15 13:00:26.502864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.346 [2024-10-15 13:00:26.502969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.346 [2024-10-15 13:00:26.502970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:06.346 13:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.346 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.606 [2024-10-15 13:00:26.703857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.606 Malloc1 00:20:06.606 13:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.606 [2024-10-15 13:00:26.762155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1263724 00:20:06.606 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:06.606 13:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:08.510 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:08.510 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.510 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.510 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.510 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:08.510 "tick_rate": 2100000000, 00:20:08.510 "poll_groups": [ 00:20:08.510 { 00:20:08.510 "name": "nvmf_tgt_poll_group_000", 00:20:08.510 "admin_qpairs": 1, 00:20:08.510 "io_qpairs": 1, 00:20:08.510 "current_admin_qpairs": 1, 00:20:08.510 "current_io_qpairs": 1, 00:20:08.510 "pending_bdev_io": 0, 00:20:08.510 "completed_nvme_io": 20065, 00:20:08.510 "transports": [ 00:20:08.510 { 00:20:08.510 "trtype": "TCP" 00:20:08.510 } 00:20:08.510 ] 00:20:08.510 }, 00:20:08.510 { 00:20:08.510 "name": "nvmf_tgt_poll_group_001", 00:20:08.510 "admin_qpairs": 0, 00:20:08.510 "io_qpairs": 1, 00:20:08.510 "current_admin_qpairs": 0, 00:20:08.510 "current_io_qpairs": 1, 00:20:08.510 "pending_bdev_io": 0, 00:20:08.510 "completed_nvme_io": 20633, 00:20:08.510 "transports": [ 00:20:08.510 { 00:20:08.510 "trtype": "TCP" 00:20:08.510 } 00:20:08.510 ] 00:20:08.510 }, 00:20:08.510 { 00:20:08.510 "name": "nvmf_tgt_poll_group_002", 00:20:08.510 "admin_qpairs": 0, 00:20:08.510 "io_qpairs": 1, 00:20:08.510 "current_admin_qpairs": 0, 00:20:08.510 "current_io_qpairs": 1, 00:20:08.510 "pending_bdev_io": 0, 00:20:08.510 "completed_nvme_io": 20219, 00:20:08.510 
"transports": [ 00:20:08.510 { 00:20:08.510 "trtype": "TCP" 00:20:08.510 } 00:20:08.510 ] 00:20:08.510 }, 00:20:08.510 { 00:20:08.510 "name": "nvmf_tgt_poll_group_003", 00:20:08.510 "admin_qpairs": 0, 00:20:08.510 "io_qpairs": 1, 00:20:08.510 "current_admin_qpairs": 0, 00:20:08.510 "current_io_qpairs": 1, 00:20:08.510 "pending_bdev_io": 0, 00:20:08.510 "completed_nvme_io": 20046, 00:20:08.510 "transports": [ 00:20:08.510 { 00:20:08.510 "trtype": "TCP" 00:20:08.510 } 00:20:08.510 ] 00:20:08.510 } 00:20:08.510 ] 00:20:08.510 }' 00:20:08.510 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:08.510 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:08.769 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:08.769 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:08.769 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1263724 00:20:16.890 Initializing NVMe Controllers 00:20:16.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:16.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:16.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:16.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:16.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:16.890 Initialization complete. Launching workers. 
00:20:16.890 ======================================================== 00:20:16.890 Latency(us) 00:20:16.890 Device Information : IOPS MiB/s Average min max 00:20:16.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10557.14 41.24 6061.58 2023.77 10041.64 00:20:16.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10835.73 42.33 5906.29 1567.04 10156.14 00:20:16.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10687.03 41.75 5988.63 2381.89 10121.09 00:20:16.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10615.84 41.47 6027.78 2102.19 10436.50 00:20:16.890 ======================================================== 00:20:16.890 Total : 42695.74 166.78 5995.51 1567.04 10436.50 00:20:16.890 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:16.890 rmmod nvme_tcp 00:20:16.890 rmmod nvme_fabrics 00:20:16.890 rmmod nvme_keyring 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:16.890 13:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1263622 ']' 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1263622 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1263622 ']' 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1263622 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:16.890 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1263622 00:20:16.890 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:16.890 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:16.890 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1263622' 00:20:16.890 killing process with pid 1263622 00:20:16.890 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1263622 00:20:16.890 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1263622 00:20:17.149 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:17.149 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:17.149 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:17.149 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:17.149 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:20:17.149 
13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:17.149 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:20:17.149 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:17.149 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:17.149 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.149 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.149 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.055 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:19.055 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:19.055 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:19.055 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:20.434 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:22.971 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:28.248 13:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:28.248 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:28.248 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:28.248 Found net devices under 0000:86:00.0: cvl_0_0 00:20:28.248 13:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:28.248 Found net devices under 0000:86:00.1: cvl_0_1 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:20:28.248 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:28.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:20:28.249 00:20:28.249 --- 10.0.0.2 ping statistics --- 00:20:28.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.249 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:28.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:20:28.249 00:20:28.249 --- 10.0.0.1 ping statistics --- 00:20:28.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.249 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:28.249 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:28.249 net.core.busy_poll = 1 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:28.249 net.core.busy_read = 1 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1267557 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1267557 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1267557 ']' 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.249 [2024-10-15 13:00:48.292732] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:20:28.249 [2024-10-15 13:00:48.292778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.249 [2024-10-15 13:00:48.365042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.249 [2024-10-15 13:00:48.407737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.249 [2024-10-15 13:00:48.407776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.249 [2024-10-15 13:00:48.407785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.249 [2024-10-15 13:00:48.407793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:28.249 [2024-10-15 13:00:48.407800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.249 [2024-10-15 13:00:48.409359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.249 [2024-10-15 13:00:48.409469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.249 [2024-10-15 13:00:48.409583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.249 [2024-10-15 13:00:48.409584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.249 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.509 [2024-10-15 13:00:48.631031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.509 13:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.509 Malloc1 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.509 [2024-10-15 13:00:48.688891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1267677 
00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:28.509 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:30.415 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:30.415 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.415 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.415 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.415 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:30.415 "tick_rate": 2100000000, 00:20:30.415 "poll_groups": [ 00:20:30.415 { 00:20:30.415 "name": "nvmf_tgt_poll_group_000", 00:20:30.415 "admin_qpairs": 1, 00:20:30.415 "io_qpairs": 2, 00:20:30.415 "current_admin_qpairs": 1, 00:20:30.415 "current_io_qpairs": 2, 00:20:30.415 "pending_bdev_io": 0, 00:20:30.415 "completed_nvme_io": 29049, 00:20:30.415 "transports": [ 00:20:30.415 { 00:20:30.415 "trtype": "TCP" 00:20:30.415 } 00:20:30.415 ] 00:20:30.415 }, 00:20:30.415 { 00:20:30.415 "name": "nvmf_tgt_poll_group_001", 00:20:30.415 "admin_qpairs": 0, 00:20:30.415 "io_qpairs": 2, 00:20:30.415 "current_admin_qpairs": 0, 00:20:30.415 "current_io_qpairs": 2, 00:20:30.415 "pending_bdev_io": 0, 00:20:30.415 "completed_nvme_io": 29299, 00:20:30.415 "transports": [ 00:20:30.415 { 00:20:30.415 "trtype": "TCP" 00:20:30.415 } 00:20:30.415 ] 00:20:30.415 }, 00:20:30.415 { 00:20:30.415 "name": "nvmf_tgt_poll_group_002", 00:20:30.415 "admin_qpairs": 0, 00:20:30.415 "io_qpairs": 0, 00:20:30.415 "current_admin_qpairs": 0, 
00:20:30.415 "current_io_qpairs": 0, 00:20:30.415 "pending_bdev_io": 0, 00:20:30.415 "completed_nvme_io": 0, 00:20:30.415 "transports": [ 00:20:30.415 { 00:20:30.415 "trtype": "TCP" 00:20:30.415 } 00:20:30.415 ] 00:20:30.415 }, 00:20:30.415 { 00:20:30.415 "name": "nvmf_tgt_poll_group_003", 00:20:30.415 "admin_qpairs": 0, 00:20:30.415 "io_qpairs": 0, 00:20:30.415 "current_admin_qpairs": 0, 00:20:30.415 "current_io_qpairs": 0, 00:20:30.415 "pending_bdev_io": 0, 00:20:30.415 "completed_nvme_io": 0, 00:20:30.415 "transports": [ 00:20:30.415 { 00:20:30.415 "trtype": "TCP" 00:20:30.415 } 00:20:30.415 ] 00:20:30.415 } 00:20:30.415 ] 00:20:30.415 }' 00:20:30.415 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:30.415 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:30.674 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:30.674 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:30.674 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1267677 00:20:38.795 Initializing NVMe Controllers 00:20:38.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:38.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:38.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:38.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:38.795 Initialization complete. Launching workers. 
00:20:38.795 ======================================================== 00:20:38.795 Latency(us) 00:20:38.795 Device Information : IOPS MiB/s Average min max 00:20:38.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6998.40 27.34 9175.37 1558.79 51693.39 00:20:38.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7989.50 31.21 8045.31 1456.34 52762.05 00:20:38.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7421.20 28.99 8662.31 1057.11 53037.57 00:20:38.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7430.40 29.02 8615.28 1205.70 52040.88 00:20:38.795 ======================================================== 00:20:38.795 Total : 29839.49 116.56 8605.73 1057.11 53037.57 00:20:38.795 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.795 rmmod nvme_tcp 00:20:38.795 rmmod nvme_fabrics 00:20:38.795 rmmod nvme_keyring 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:38.795 13:00:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1267557 ']' 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1267557 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1267557 ']' 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1267557 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1267557 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1267557' 00:20:38.795 killing process with pid 1267557 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1267557 00:20:38.795 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1267557 00:20:39.053 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:39.053 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:39.053 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:39.053 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:39.053 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:20:39.053 
13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:39.053 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:20:39.053 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.053 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.053 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.053 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.053 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:42.340 00:20:42.340 real 0m50.258s 00:20:42.340 user 2m43.907s 00:20:42.340 sys 0m10.170s 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.340 ************************************ 00:20:42.340 END TEST nvmf_perf_adq 00:20:42.340 ************************************ 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:42.340 ************************************ 00:20:42.340 START TEST nvmf_shutdown 00:20:42.340 ************************************ 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:42.340 * Looking for test storage... 00:20:42.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.340 13:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.340 --rc genhtml_branch_coverage=1 00:20:42.340 --rc genhtml_function_coverage=1 00:20:42.340 --rc genhtml_legend=1 00:20:42.340 --rc geninfo_all_blocks=1 00:20:42.340 --rc geninfo_unexecuted_blocks=1 00:20:42.340 00:20:42.340 ' 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.340 --rc genhtml_branch_coverage=1 00:20:42.340 --rc genhtml_function_coverage=1 00:20:42.340 --rc genhtml_legend=1 00:20:42.340 --rc geninfo_all_blocks=1 00:20:42.340 --rc geninfo_unexecuted_blocks=1 00:20:42.340 00:20:42.340 ' 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.340 --rc genhtml_branch_coverage=1 00:20:42.340 --rc genhtml_function_coverage=1 00:20:42.340 --rc genhtml_legend=1 00:20:42.340 --rc geninfo_all_blocks=1 00:20:42.340 --rc geninfo_unexecuted_blocks=1 00:20:42.340 00:20:42.340 ' 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.340 --rc genhtml_branch_coverage=1 00:20:42.340 --rc genhtml_function_coverage=1 00:20:42.340 --rc genhtml_legend=1 00:20:42.340 --rc geninfo_all_blocks=1 00:20:42.340 --rc geninfo_unexecuted_blocks=1 00:20:42.340 00:20:42.340 ' 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.340 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:42.341 ************************************ 00:20:42.341 START TEST nvmf_shutdown_tc1 00:20:42.341 ************************************ 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.341 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:48.906 13:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.906 13:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:48.906 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.906 13:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.906 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:48.907 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:48.907 Found net devices under 0000:86:00.0: cvl_0_0 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:48.907 Found net devices under 0000:86:00.1: cvl_0_1 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:48.907 13:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:48.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:20:48.907 00:20:48.907 --- 10.0.0.2 ping statistics --- 00:20:48.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.907 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:48.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:20:48.907 00:20:48.907 --- 10.0.0.1 ping statistics --- 00:20:48.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.907 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1273119 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1273119 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1273119 ']' 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:48.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.907 [2024-10-15 13:01:08.651820] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:20:48.907 [2024-10-15 13:01:08.651871] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.907 [2024-10-15 13:01:08.724351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.907 [2024-10-15 13:01:08.767263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.907 [2024-10-15 13:01:08.767300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.907 [2024-10-15 13:01:08.767307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.907 [2024-10-15 13:01:08.767313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.907 [2024-10-15 13:01:08.767318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:48.907 [2024-10-15 13:01:08.768875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.907 [2024-10-15 13:01:08.768911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.907 [2024-10-15 13:01:08.769018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.907 [2024-10-15 13:01:08.769019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:20:48.907 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.908 [2024-10-15 13:01:08.906000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.908 13:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.908 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.908 Malloc1 00:20:48.908 [2024-10-15 13:01:09.015120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.908 Malloc2 00:20:48.908 Malloc3 00:20:48.908 Malloc4 00:20:48.908 Malloc5 00:20:48.908 Malloc6 00:20:49.167 Malloc7 00:20:49.167 Malloc8 00:20:49.167 Malloc9 
00:20:49.167 Malloc10 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1273182 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1273182 /var/tmp/bdevperf.sock 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1273182 ']' 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:49.167 { 00:20:49.167 "params": { 00:20:49.167 "name": "Nvme$subsystem", 00:20:49.167 "trtype": "$TEST_TRANSPORT", 00:20:49.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.167 "adrfam": "ipv4", 00:20:49.167 "trsvcid": "$NVMF_PORT", 00:20:49.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.167 "hdgst": ${hdgst:-false}, 00:20:49.167 "ddgst": ${ddgst:-false} 00:20:49.167 }, 00:20:49.167 "method": "bdev_nvme_attach_controller" 00:20:49.167 } 00:20:49.167 EOF 00:20:49.167 )") 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:49.167 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:49.167 { 00:20:49.167 "params": { 00:20:49.167 "name": "Nvme$subsystem", 00:20:49.167 "trtype": "$TEST_TRANSPORT", 00:20:49.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.167 "adrfam": "ipv4", 00:20:49.167 "trsvcid": "$NVMF_PORT", 00:20:49.167 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.168 "hdgst": ${hdgst:-false}, 00:20:49.168 "ddgst": ${ddgst:-false} 00:20:49.168 }, 00:20:49.168 "method": "bdev_nvme_attach_controller" 00:20:49.168 } 00:20:49.168 EOF 00:20:49.168 )") 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:49.168 { 00:20:49.168 "params": { 00:20:49.168 "name": "Nvme$subsystem", 00:20:49.168 "trtype": "$TEST_TRANSPORT", 00:20:49.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.168 "adrfam": "ipv4", 00:20:49.168 "trsvcid": "$NVMF_PORT", 00:20:49.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.168 "hdgst": ${hdgst:-false}, 00:20:49.168 "ddgst": ${ddgst:-false} 00:20:49.168 }, 00:20:49.168 "method": "bdev_nvme_attach_controller" 00:20:49.168 } 00:20:49.168 EOF 00:20:49.168 )") 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:49.168 { 00:20:49.168 "params": { 00:20:49.168 "name": "Nvme$subsystem", 00:20:49.168 "trtype": "$TEST_TRANSPORT", 00:20:49.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.168 "adrfam": "ipv4", 00:20:49.168 "trsvcid": "$NVMF_PORT", 00:20:49.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.168 "hdgst": 
${hdgst:-false}, 00:20:49.168 "ddgst": ${ddgst:-false} 00:20:49.168 }, 00:20:49.168 "method": "bdev_nvme_attach_controller" 00:20:49.168 } 00:20:49.168 EOF 00:20:49.168 )") 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:49.168 { 00:20:49.168 "params": { 00:20:49.168 "name": "Nvme$subsystem", 00:20:49.168 "trtype": "$TEST_TRANSPORT", 00:20:49.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.168 "adrfam": "ipv4", 00:20:49.168 "trsvcid": "$NVMF_PORT", 00:20:49.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.168 "hdgst": ${hdgst:-false}, 00:20:49.168 "ddgst": ${ddgst:-false} 00:20:49.168 }, 00:20:49.168 "method": "bdev_nvme_attach_controller" 00:20:49.168 } 00:20:49.168 EOF 00:20:49.168 )") 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:49.168 { 00:20:49.168 "params": { 00:20:49.168 "name": "Nvme$subsystem", 00:20:49.168 "trtype": "$TEST_TRANSPORT", 00:20:49.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.168 "adrfam": "ipv4", 00:20:49.168 "trsvcid": "$NVMF_PORT", 00:20:49.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.168 "hdgst": ${hdgst:-false}, 00:20:49.168 "ddgst": ${ddgst:-false} 00:20:49.168 }, 00:20:49.168 "method": "bdev_nvme_attach_controller" 
00:20:49.168 } 00:20:49.168 EOF 00:20:49.168 )") 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:49.168 [2024-10-15 13:01:09.481976] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:20:49.168 [2024-10-15 13:01:09.482028] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:49.168 { 00:20:49.168 "params": { 00:20:49.168 "name": "Nvme$subsystem", 00:20:49.168 "trtype": "$TEST_TRANSPORT", 00:20:49.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.168 "adrfam": "ipv4", 00:20:49.168 "trsvcid": "$NVMF_PORT", 00:20:49.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.168 "hdgst": ${hdgst:-false}, 00:20:49.168 "ddgst": ${ddgst:-false} 00:20:49.168 }, 00:20:49.168 "method": "bdev_nvme_attach_controller" 00:20:49.168 } 00:20:49.168 EOF 00:20:49.168 )") 00:20:49.168 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:49.427 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:49.427 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:49.427 { 00:20:49.427 "params": { 00:20:49.427 "name": "Nvme$subsystem", 00:20:49.427 "trtype": "$TEST_TRANSPORT", 00:20:49.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.427 "adrfam": "ipv4", 00:20:49.427 "trsvcid": "$NVMF_PORT", 
00:20:49.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.427 "hdgst": ${hdgst:-false}, 00:20:49.427 "ddgst": ${ddgst:-false} 00:20:49.427 }, 00:20:49.427 "method": "bdev_nvme_attach_controller" 00:20:49.427 } 00:20:49.427 EOF 00:20:49.427 )") 00:20:49.427 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:49.427 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:49.427 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:49.427 { 00:20:49.427 "params": { 00:20:49.427 "name": "Nvme$subsystem", 00:20:49.427 "trtype": "$TEST_TRANSPORT", 00:20:49.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.427 "adrfam": "ipv4", 00:20:49.427 "trsvcid": "$NVMF_PORT", 00:20:49.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.427 "hdgst": ${hdgst:-false}, 00:20:49.427 "ddgst": ${ddgst:-false} 00:20:49.427 }, 00:20:49.427 "method": "bdev_nvme_attach_controller" 00:20:49.427 } 00:20:49.427 EOF 00:20:49.427 )") 00:20:49.427 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:49.427 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:49.427 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:49.427 { 00:20:49.427 "params": { 00:20:49.427 "name": "Nvme$subsystem", 00:20:49.427 "trtype": "$TEST_TRANSPORT", 00:20:49.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.427 "adrfam": "ipv4", 00:20:49.427 "trsvcid": "$NVMF_PORT", 00:20:49.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:20:49.427 "hdgst": ${hdgst:-false}, 00:20:49.427 "ddgst": ${ddgst:-false} 00:20:49.427 }, 00:20:49.427 "method": "bdev_nvme_attach_controller" 00:20:49.427 } 00:20:49.427 EOF 00:20:49.427 )") 00:20:49.427 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:49.427 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:20:49.427 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:20:49.427 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:49.427 "params": { 00:20:49.427 "name": "Nvme1", 00:20:49.427 "trtype": "tcp", 00:20:49.427 "traddr": "10.0.0.2", 00:20:49.427 "adrfam": "ipv4", 00:20:49.427 "trsvcid": "4420", 00:20:49.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.427 "hdgst": false, 00:20:49.427 "ddgst": false 00:20:49.427 }, 00:20:49.427 "method": "bdev_nvme_attach_controller" 00:20:49.427 },{ 00:20:49.427 "params": { 00:20:49.427 "name": "Nvme2", 00:20:49.427 "trtype": "tcp", 00:20:49.427 "traddr": "10.0.0.2", 00:20:49.427 "adrfam": "ipv4", 00:20:49.427 "trsvcid": "4420", 00:20:49.427 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:49.427 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:49.427 "hdgst": false, 00:20:49.427 "ddgst": false 00:20:49.427 }, 00:20:49.427 "method": "bdev_nvme_attach_controller" 00:20:49.427 },{ 00:20:49.427 "params": { 00:20:49.427 "name": "Nvme3", 00:20:49.427 "trtype": "tcp", 00:20:49.427 "traddr": "10.0.0.2", 00:20:49.427 "adrfam": "ipv4", 00:20:49.427 "trsvcid": "4420", 00:20:49.427 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:49.427 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:49.427 "hdgst": false, 00:20:49.427 "ddgst": false 00:20:49.427 }, 00:20:49.427 "method": "bdev_nvme_attach_controller" 00:20:49.427 },{ 00:20:49.428 "params": { 00:20:49.428 
"name": "Nvme4", 00:20:49.428 "trtype": "tcp", 00:20:49.428 "traddr": "10.0.0.2", 00:20:49.428 "adrfam": "ipv4", 00:20:49.428 "trsvcid": "4420", 00:20:49.428 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:49.428 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:49.428 "hdgst": false, 00:20:49.428 "ddgst": false 00:20:49.428 }, 00:20:49.428 "method": "bdev_nvme_attach_controller" 00:20:49.428 },{ 00:20:49.428 "params": { 00:20:49.428 "name": "Nvme5", 00:20:49.428 "trtype": "tcp", 00:20:49.428 "traddr": "10.0.0.2", 00:20:49.428 "adrfam": "ipv4", 00:20:49.428 "trsvcid": "4420", 00:20:49.428 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:49.428 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:49.428 "hdgst": false, 00:20:49.428 "ddgst": false 00:20:49.428 }, 00:20:49.428 "method": "bdev_nvme_attach_controller" 00:20:49.428 },{ 00:20:49.428 "params": { 00:20:49.428 "name": "Nvme6", 00:20:49.428 "trtype": "tcp", 00:20:49.428 "traddr": "10.0.0.2", 00:20:49.428 "adrfam": "ipv4", 00:20:49.428 "trsvcid": "4420", 00:20:49.428 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:49.428 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:49.428 "hdgst": false, 00:20:49.428 "ddgst": false 00:20:49.428 }, 00:20:49.428 "method": "bdev_nvme_attach_controller" 00:20:49.428 },{ 00:20:49.428 "params": { 00:20:49.428 "name": "Nvme7", 00:20:49.428 "trtype": "tcp", 00:20:49.428 "traddr": "10.0.0.2", 00:20:49.428 "adrfam": "ipv4", 00:20:49.428 "trsvcid": "4420", 00:20:49.428 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:49.428 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:49.428 "hdgst": false, 00:20:49.428 "ddgst": false 00:20:49.428 }, 00:20:49.428 "method": "bdev_nvme_attach_controller" 00:20:49.428 },{ 00:20:49.428 "params": { 00:20:49.428 "name": "Nvme8", 00:20:49.428 "trtype": "tcp", 00:20:49.428 "traddr": "10.0.0.2", 00:20:49.428 "adrfam": "ipv4", 00:20:49.428 "trsvcid": "4420", 00:20:49.428 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:49.428 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:49.428 
"hdgst": false, 00:20:49.428 "ddgst": false 00:20:49.428 }, 00:20:49.428 "method": "bdev_nvme_attach_controller" 00:20:49.428 },{ 00:20:49.428 "params": { 00:20:49.428 "name": "Nvme9", 00:20:49.428 "trtype": "tcp", 00:20:49.428 "traddr": "10.0.0.2", 00:20:49.428 "adrfam": "ipv4", 00:20:49.428 "trsvcid": "4420", 00:20:49.428 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:49.428 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:49.428 "hdgst": false, 00:20:49.428 "ddgst": false 00:20:49.428 }, 00:20:49.428 "method": "bdev_nvme_attach_controller" 00:20:49.428 },{ 00:20:49.428 "params": { 00:20:49.428 "name": "Nvme10", 00:20:49.428 "trtype": "tcp", 00:20:49.428 "traddr": "10.0.0.2", 00:20:49.428 "adrfam": "ipv4", 00:20:49.428 "trsvcid": "4420", 00:20:49.428 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:49.428 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:49.428 "hdgst": false, 00:20:49.428 "ddgst": false 00:20:49.428 }, 00:20:49.428 "method": "bdev_nvme_attach_controller" 00:20:49.428 }' 00:20:49.428 [2024-10-15 13:01:09.554093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.428 [2024-10-15 13:01:09.595440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.335 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:51.335 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:20:51.335 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:51.335 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.335 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.335 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.335 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1273182 00:20:51.335 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:51.335 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:52.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1273182 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1273119 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:52.351 { 00:20:52.351 "params": { 00:20:52.351 "name": "Nvme$subsystem", 00:20:52.351 "trtype": "$TEST_TRANSPORT", 00:20:52.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.351 "adrfam": "ipv4", 00:20:52.351 "trsvcid": "$NVMF_PORT", 00:20:52.351 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.351 "hdgst": ${hdgst:-false}, 00:20:52.351 "ddgst": ${ddgst:-false} 00:20:52.351 }, 00:20:52.351 "method": "bdev_nvme_attach_controller" 00:20:52.351 } 00:20:52.351 EOF 00:20:52.351 )") 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:52.351 { 00:20:52.351 "params": { 00:20:52.351 "name": "Nvme$subsystem", 00:20:52.351 "trtype": "$TEST_TRANSPORT", 00:20:52.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.351 "adrfam": "ipv4", 00:20:52.351 "trsvcid": "$NVMF_PORT", 00:20:52.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.351 "hdgst": ${hdgst:-false}, 00:20:52.351 "ddgst": ${ddgst:-false} 00:20:52.351 }, 00:20:52.351 "method": "bdev_nvme_attach_controller" 00:20:52.351 } 00:20:52.351 EOF 00:20:52.351 )") 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:52.351 { 00:20:52.351 "params": { 00:20:52.351 "name": "Nvme$subsystem", 00:20:52.351 "trtype": "$TEST_TRANSPORT", 00:20:52.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.351 "adrfam": "ipv4", 00:20:52.351 "trsvcid": "$NVMF_PORT", 00:20:52.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.351 "hdgst": 
${hdgst:-false}, 00:20:52.351 "ddgst": ${ddgst:-false} 00:20:52.351 }, 00:20:52.351 "method": "bdev_nvme_attach_controller" 00:20:52.351 } 00:20:52.351 EOF 00:20:52.351 )") 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:52.351 { 00:20:52.351 "params": { 00:20:52.351 "name": "Nvme$subsystem", 00:20:52.351 "trtype": "$TEST_TRANSPORT", 00:20:52.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.351 "adrfam": "ipv4", 00:20:52.351 "trsvcid": "$NVMF_PORT", 00:20:52.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.351 "hdgst": ${hdgst:-false}, 00:20:52.351 "ddgst": ${ddgst:-false} 00:20:52.351 }, 00:20:52.351 "method": "bdev_nvme_attach_controller" 00:20:52.351 } 00:20:52.351 EOF 00:20:52.351 )") 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:52.351 { 00:20:52.351 "params": { 00:20:52.351 "name": "Nvme$subsystem", 00:20:52.351 "trtype": "$TEST_TRANSPORT", 00:20:52.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.351 "adrfam": "ipv4", 00:20:52.351 "trsvcid": "$NVMF_PORT", 00:20:52.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.351 "hdgst": ${hdgst:-false}, 00:20:52.351 "ddgst": ${ddgst:-false} 00:20:52.351 }, 00:20:52.351 "method": "bdev_nvme_attach_controller" 
00:20:52.351 } 00:20:52.351 EOF 00:20:52.351 )") 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:52.351 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:52.351 { 00:20:52.351 "params": { 00:20:52.351 "name": "Nvme$subsystem", 00:20:52.351 "trtype": "$TEST_TRANSPORT", 00:20:52.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.351 "adrfam": "ipv4", 00:20:52.351 "trsvcid": "$NVMF_PORT", 00:20:52.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.351 "hdgst": ${hdgst:-false}, 00:20:52.351 "ddgst": ${ddgst:-false} 00:20:52.351 }, 00:20:52.351 "method": "bdev_nvme_attach_controller" 00:20:52.351 } 00:20:52.351 EOF 00:20:52.351 )") 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:52.352 { 00:20:52.352 "params": { 00:20:52.352 "name": "Nvme$subsystem", 00:20:52.352 "trtype": "$TEST_TRANSPORT", 00:20:52.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "$NVMF_PORT", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.352 "hdgst": ${hdgst:-false}, 00:20:52.352 "ddgst": ${ddgst:-false} 00:20:52.352 }, 00:20:52.352 "method": "bdev_nvme_attach_controller" 00:20:52.352 } 00:20:52.352 EOF 00:20:52.352 )") 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:20:52.352 [2024-10-15 13:01:12.424806] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:20:52.352 [2024-10-15 13:01:12.424857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273710 ] 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:52.352 { 00:20:52.352 "params": { 00:20:52.352 "name": "Nvme$subsystem", 00:20:52.352 "trtype": "$TEST_TRANSPORT", 00:20:52.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "$NVMF_PORT", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.352 "hdgst": ${hdgst:-false}, 00:20:52.352 "ddgst": ${ddgst:-false} 00:20:52.352 }, 00:20:52.352 "method": "bdev_nvme_attach_controller" 00:20:52.352 } 00:20:52.352 EOF 00:20:52.352 )") 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:52.352 { 00:20:52.352 "params": { 00:20:52.352 "name": "Nvme$subsystem", 00:20:52.352 "trtype": "$TEST_TRANSPORT", 00:20:52.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "$NVMF_PORT", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.352 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:20:52.352 "hdgst": ${hdgst:-false}, 00:20:52.352 "ddgst": ${ddgst:-false} 00:20:52.352 }, 00:20:52.352 "method": "bdev_nvme_attach_controller" 00:20:52.352 } 00:20:52.352 EOF 00:20:52.352 )") 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:52.352 { 00:20:52.352 "params": { 00:20:52.352 "name": "Nvme$subsystem", 00:20:52.352 "trtype": "$TEST_TRANSPORT", 00:20:52.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "$NVMF_PORT", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.352 "hdgst": ${hdgst:-false}, 00:20:52.352 "ddgst": ${ddgst:-false} 00:20:52.352 }, 00:20:52.352 "method": "bdev_nvme_attach_controller" 00:20:52.352 } 00:20:52.352 EOF 00:20:52.352 )") 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:20:52.352 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:52.352 "params": { 00:20:52.352 "name": "Nvme1", 00:20:52.352 "trtype": "tcp", 00:20:52.352 "traddr": "10.0.0.2", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "4420", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.352 "hdgst": false, 00:20:52.352 "ddgst": false 00:20:52.352 }, 00:20:52.352 "method": "bdev_nvme_attach_controller" 00:20:52.352 },{ 00:20:52.352 "params": { 00:20:52.352 "name": "Nvme2", 00:20:52.352 "trtype": "tcp", 00:20:52.352 "traddr": "10.0.0.2", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "4420", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:52.352 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:52.352 "hdgst": false, 00:20:52.352 "ddgst": false 00:20:52.352 }, 00:20:52.352 "method": "bdev_nvme_attach_controller" 00:20:52.352 },{ 00:20:52.352 "params": { 00:20:52.352 "name": "Nvme3", 00:20:52.352 "trtype": "tcp", 00:20:52.352 "traddr": "10.0.0.2", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "4420", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:52.352 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:52.352 "hdgst": false, 00:20:52.352 "ddgst": false 00:20:52.352 }, 00:20:52.352 "method": "bdev_nvme_attach_controller" 00:20:52.352 },{ 00:20:52.352 "params": { 00:20:52.352 "name": "Nvme4", 00:20:52.352 "trtype": "tcp", 00:20:52.352 "traddr": "10.0.0.2", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "4420", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:52.352 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:52.352 "hdgst": false, 00:20:52.352 "ddgst": false 00:20:52.352 }, 00:20:52.352 "method": "bdev_nvme_attach_controller" 00:20:52.352 },{ 00:20:52.352 "params": { 
00:20:52.352 "name": "Nvme5", 00:20:52.352 "trtype": "tcp", 00:20:52.352 "traddr": "10.0.0.2", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "4420", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:52.352 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:52.352 "hdgst": false, 00:20:52.352 "ddgst": false 00:20:52.352 }, 00:20:52.352 "method": "bdev_nvme_attach_controller" 00:20:52.352 },{ 00:20:52.352 "params": { 00:20:52.352 "name": "Nvme6", 00:20:52.352 "trtype": "tcp", 00:20:52.352 "traddr": "10.0.0.2", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "4420", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:52.352 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:52.352 "hdgst": false, 00:20:52.352 "ddgst": false 00:20:52.352 }, 00:20:52.352 "method": "bdev_nvme_attach_controller" 00:20:52.352 },{ 00:20:52.352 "params": { 00:20:52.352 "name": "Nvme7", 00:20:52.352 "trtype": "tcp", 00:20:52.352 "traddr": "10.0.0.2", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "4420", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:52.352 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:52.352 "hdgst": false, 00:20:52.352 "ddgst": false 00:20:52.352 }, 00:20:52.352 "method": "bdev_nvme_attach_controller" 00:20:52.352 },{ 00:20:52.352 "params": { 00:20:52.352 "name": "Nvme8", 00:20:52.352 "trtype": "tcp", 00:20:52.352 "traddr": "10.0.0.2", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "4420", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:52.352 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:52.352 "hdgst": false, 00:20:52.352 "ddgst": false 00:20:52.352 }, 00:20:52.352 "method": "bdev_nvme_attach_controller" 00:20:52.352 },{ 00:20:52.352 "params": { 00:20:52.352 "name": "Nvme9", 00:20:52.352 "trtype": "tcp", 00:20:52.352 "traddr": "10.0.0.2", 00:20:52.352 "adrfam": "ipv4", 00:20:52.352 "trsvcid": "4420", 00:20:52.352 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:52.353 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:52.353 "hdgst": false, 00:20:52.353 "ddgst": false 00:20:52.353 }, 00:20:52.353 "method": "bdev_nvme_attach_controller" 00:20:52.353 },{ 00:20:52.353 "params": { 00:20:52.353 "name": "Nvme10", 00:20:52.353 "trtype": "tcp", 00:20:52.353 "traddr": "10.0.0.2", 00:20:52.353 "adrfam": "ipv4", 00:20:52.353 "trsvcid": "4420", 00:20:52.353 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:52.353 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:52.353 "hdgst": false, 00:20:52.353 "ddgst": false 00:20:52.353 }, 00:20:52.353 "method": "bdev_nvme_attach_controller" 00:20:52.353 }' 00:20:52.353 [2024-10-15 13:01:12.495031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.353 [2024-10-15 13:01:12.535761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.740 Running I/O for 1 seconds... 00:20:54.935 2259.00 IOPS, 141.19 MiB/s 00:20:54.935 Latency(us) 00:20:54.935 [2024-10-15T11:01:15.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.935 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.935 Verification LBA range: start 0x0 length 0x400 00:20:54.935 Nvme1n1 : 1.14 285.39 17.84 0.00 0.00 216902.33 16477.62 200727.41 00:20:54.935 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.935 Verification LBA range: start 0x0 length 0x400 00:20:54.935 Nvme2n1 : 1.16 274.72 17.17 0.00 0.00 226693.71 15166.90 220700.28 00:20:54.935 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.935 Verification LBA range: start 0x0 length 0x400 00:20:54.935 Nvme3n1 : 1.14 283.94 17.75 0.00 0.00 212898.47 14355.50 209715.20 00:20:54.935 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.935 Verification LBA range: start 0x0 length 0x400 00:20:54.935 Nvme4n1 : 1.15 277.69 17.36 0.00 0.00 219064.03 15104.49 213709.78 00:20:54.935 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:54.935 Verification LBA range: start 0x0 length 0x400 00:20:54.935 Nvme5n1 : 1.17 274.56 17.16 0.00 0.00 217350.34 15728.64 242670.45 00:20:54.935 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.935 Verification LBA range: start 0x0 length 0x400 00:20:54.935 Nvme6n1 : 1.17 273.18 17.07 0.00 0.00 216724.24 15791.06 229688.08 00:20:54.935 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.935 Verification LBA range: start 0x0 length 0x400 00:20:54.935 Nvme7n1 : 1.17 277.80 17.36 0.00 0.00 209292.07 5336.50 213709.78 00:20:54.935 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.935 Verification LBA range: start 0x0 length 0x400 00:20:54.936 Nvme8n1 : 1.17 272.50 17.03 0.00 0.00 210677.76 12295.80 228689.43 00:20:54.936 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.936 Verification LBA range: start 0x0 length 0x400 00:20:54.936 Nvme9n1 : 1.18 274.30 17.14 0.00 0.00 206705.27 1443.35 221698.93 00:20:54.936 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.936 Verification LBA range: start 0x0 length 0x400 00:20:54.936 Nvme10n1 : 1.18 270.81 16.93 0.00 0.00 206498.08 16227.96 237677.23 00:20:54.936 [2024-10-15T11:01:15.255Z] =================================================================================================================== 00:20:54.936 [2024-10-15T11:01:15.255Z] Total : 2764.88 172.81 0.00 0.00 214270.11 1443.35 242670.45 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.194 rmmod nvme_tcp 00:20:55.194 rmmod nvme_fabrics 00:20:55.194 rmmod nvme_keyring 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1273119 ']' 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1273119 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1273119 ']' 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@954 -- # kill -0 1273119 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1273119 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1273119' 00:20:55.194 killing process with pid 1273119 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1273119 00:20:55.194 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1273119 00:20:55.762 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:55.762 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:55.762 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:55.762 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:55.762 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:20:55.762 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:55.762 13:01:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:20:55.762 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.762 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:55.762 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.762 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.762 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.667 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:57.667 00:20:57.667 real 0m15.324s 00:20:57.667 user 0m34.057s 00:20:57.667 sys 0m5.834s 00:20:57.667 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:57.667 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:57.667 ************************************ 00:20:57.668 END TEST nvmf_shutdown_tc1 00:20:57.668 ************************************ 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:57.668 ************************************ 00:20:57.668 
START TEST nvmf_shutdown_tc2 00:20:57.668 ************************************ 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:57.668 13:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:57.668 13:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:57.668 13:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:57.668 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:57.668 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:57.668 13:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.668 13:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:57.668 Found net devices under 0000:86:00.0: cvl_0_0 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:57.668 Found net devices under 0000:86:00.1: cvl_0_1 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- 
# [[ yes == yes ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:57.668 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:57.669 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:57.669 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.669 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:57.669 13:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:57.930 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:57.930 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:57.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:57.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:20:57.930 00:20:57.930 --- 10.0.0.2 ping statistics --- 00:20:57.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.930 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:57.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:57.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:20:57.930 00:20:57.930 --- 10.0.0.1 ping statistics --- 00:20:57.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.930 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:57.930 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:58.189 13:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:58.189 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:58.189 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:58.189 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.189 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1274846 00:20:58.189 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1274846 00:20:58.189 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:58.189 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1274846 ']' 00:20:58.189 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.189 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.189 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:58.189 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.189 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.189 [2024-10-15 13:01:18.335394] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:20:58.189 [2024-10-15 13:01:18.335437] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.189 [2024-10-15 13:01:18.409514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:58.189 [2024-10-15 13:01:18.450429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.189 [2024-10-15 13:01:18.450467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.189 [2024-10-15 13:01:18.450474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.189 [2024-10-15 13:01:18.450480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.189 [2024-10-15 13:01:18.450484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
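The `waitforlisten 1274846` step traced above blocks until the freshly launched nvmf_tgt process is up and listening on `/var/tmp/spdk.sock`. A minimal sketch of that kind of wait loop (the retry count and 0.1 s poll interval here are illustrative; the real helper in autotest_common.sh also verifies the PID stays alive between polls):

```shell
#!/usr/bin/env bash
# Sketch: poll for an RPC UNIX socket with a bounded retry budget.
# max_retries default and sleep interval are illustrative, not SPDK's exact ones.
wait_for_rpc_sock() {
    local rpc_sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # -S: path exists and is a socket; the real helper additionally
        # checks that the target process is still running
        [[ -S $rpc_sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```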
00:20:58.189 [2024-10-15 13:01:18.452152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.189 [2024-10-15 13:01:18.452260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.189 [2024-10-15 13:01:18.452364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:58.189 [2024-10-15 13:01:18.452370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.132 [2024-10-15 13:01:19.203520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.132 13:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.132 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.132 Malloc1 00:20:59.132 [2024-10-15 13:01:19.310425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.132 Malloc2 00:20:59.132 Malloc3 00:20:59.132 Malloc4 00:20:59.397 Malloc5 00:20:59.397 Malloc6 00:20:59.397 Malloc7 00:20:59.397 Malloc8 00:20:59.397 Malloc9 
00:20:59.397 Malloc10 00:20:59.397 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.397 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:59.397 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:59.397 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1275148 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1275148 /var/tmp/bdevperf.sock 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1275148 ']' 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:59.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:59.656 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:59.656 { 00:20:59.656 "params": { 00:20:59.657 "name": "Nvme$subsystem", 00:20:59.657 "trtype": "$TEST_TRANSPORT", 00:20:59.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.657 "adrfam": "ipv4", 00:20:59.657 "trsvcid": "$NVMF_PORT", 00:20:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.657 "hdgst": ${hdgst:-false}, 00:20:59.657 "ddgst": ${ddgst:-false} 00:20:59.657 }, 00:20:59.657 "method": "bdev_nvme_attach_controller" 00:20:59.657 } 00:20:59.657 EOF 00:20:59.657 )") 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:59.657 { 00:20:59.657 "params": { 00:20:59.657 "name": "Nvme$subsystem", 00:20:59.657 "trtype": "$TEST_TRANSPORT", 00:20:59.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.657 
"adrfam": "ipv4", 00:20:59.657 "trsvcid": "$NVMF_PORT", 00:20:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.657 "hdgst": ${hdgst:-false}, 00:20:59.657 "ddgst": ${ddgst:-false} 00:20:59.657 }, 00:20:59.657 "method": "bdev_nvme_attach_controller" 00:20:59.657 } 00:20:59.657 EOF 00:20:59.657 )") 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:59.657 { 00:20:59.657 "params": { 00:20:59.657 "name": "Nvme$subsystem", 00:20:59.657 "trtype": "$TEST_TRANSPORT", 00:20:59.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.657 "adrfam": "ipv4", 00:20:59.657 "trsvcid": "$NVMF_PORT", 00:20:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.657 "hdgst": ${hdgst:-false}, 00:20:59.657 "ddgst": ${ddgst:-false} 00:20:59.657 }, 00:20:59.657 "method": "bdev_nvme_attach_controller" 00:20:59.657 } 00:20:59.657 EOF 00:20:59.657 )") 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:59.657 { 00:20:59.657 "params": { 00:20:59.657 "name": "Nvme$subsystem", 00:20:59.657 "trtype": "$TEST_TRANSPORT", 00:20:59.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.657 "adrfam": "ipv4", 00:20:59.657 "trsvcid": "$NVMF_PORT", 00:20:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:59.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.657 "hdgst": ${hdgst:-false}, 00:20:59.657 "ddgst": ${ddgst:-false} 00:20:59.657 }, 00:20:59.657 "method": "bdev_nvme_attach_controller" 00:20:59.657 } 00:20:59.657 EOF 00:20:59.657 )") 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:59.657 { 00:20:59.657 "params": { 00:20:59.657 "name": "Nvme$subsystem", 00:20:59.657 "trtype": "$TEST_TRANSPORT", 00:20:59.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.657 "adrfam": "ipv4", 00:20:59.657 "trsvcid": "$NVMF_PORT", 00:20:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.657 "hdgst": ${hdgst:-false}, 00:20:59.657 "ddgst": ${ddgst:-false} 00:20:59.657 }, 00:20:59.657 "method": "bdev_nvme_attach_controller" 00:20:59.657 } 00:20:59.657 EOF 00:20:59.657 )") 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:59.657 { 00:20:59.657 "params": { 00:20:59.657 "name": "Nvme$subsystem", 00:20:59.657 "trtype": "$TEST_TRANSPORT", 00:20:59.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.657 "adrfam": "ipv4", 00:20:59.657 "trsvcid": "$NVMF_PORT", 00:20:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.657 "hdgst": ${hdgst:-false}, 00:20:59.657 "ddgst": 
${ddgst:-false} 00:20:59.657 }, 00:20:59.657 "method": "bdev_nvme_attach_controller" 00:20:59.657 } 00:20:59.657 EOF 00:20:59.657 )") 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:59.657 { 00:20:59.657 "params": { 00:20:59.657 "name": "Nvme$subsystem", 00:20:59.657 "trtype": "$TEST_TRANSPORT", 00:20:59.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.657 "adrfam": "ipv4", 00:20:59.657 "trsvcid": "$NVMF_PORT", 00:20:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.657 "hdgst": ${hdgst:-false}, 00:20:59.657 "ddgst": ${ddgst:-false} 00:20:59.657 }, 00:20:59.657 "method": "bdev_nvme_attach_controller" 00:20:59.657 } 00:20:59.657 EOF 00:20:59.657 )") 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:20:59.657 [2024-10-15 13:01:19.783560] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:20:59.657 [2024-10-15 13:01:19.783617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275148 ] 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:59.657 { 00:20:59.657 "params": { 00:20:59.657 "name": "Nvme$subsystem", 00:20:59.657 "trtype": "$TEST_TRANSPORT", 00:20:59.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.657 "adrfam": "ipv4", 00:20:59.657 "trsvcid": "$NVMF_PORT", 00:20:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.657 "hdgst": ${hdgst:-false}, 00:20:59.657 "ddgst": ${ddgst:-false} 00:20:59.657 }, 00:20:59.657 "method": "bdev_nvme_attach_controller" 00:20:59.657 } 00:20:59.657 EOF 00:20:59.657 )") 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:59.657 { 00:20:59.657 "params": { 00:20:59.657 "name": "Nvme$subsystem", 00:20:59.657 "trtype": "$TEST_TRANSPORT", 00:20:59.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.657 "adrfam": "ipv4", 00:20:59.657 "trsvcid": "$NVMF_PORT", 00:20:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.657 "hdgst": ${hdgst:-false}, 00:20:59.657 "ddgst": ${ddgst:-false} 00:20:59.657 }, 00:20:59.657 "method": 
"bdev_nvme_attach_controller" 00:20:59.657 } 00:20:59.657 EOF 00:20:59.657 )") 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:59.657 { 00:20:59.657 "params": { 00:20:59.657 "name": "Nvme$subsystem", 00:20:59.657 "trtype": "$TEST_TRANSPORT", 00:20:59.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.657 "adrfam": "ipv4", 00:20:59.657 "trsvcid": "$NVMF_PORT", 00:20:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.657 "hdgst": ${hdgst:-false}, 00:20:59.657 "ddgst": ${ddgst:-false} 00:20:59.657 }, 00:20:59.657 "method": "bdev_nvme_attach_controller" 00:20:59.657 } 00:20:59.657 EOF 00:20:59.657 )") 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 
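The `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10` call traced above builds the bdevperf config by emitting one heredoc fragment per subsystem into an array, then joining the entries with `IFS=,` before the final `jq .` / `printf` pass. A stripped-down sketch of the same pattern (the subsystem numbering and NQNs follow the log; the function name is invented for illustration and the body omits the transport fields and jq validation of the real helper):

```shell
#!/usr/bin/env bash
# Sketch: one JSON fragment per subsystem via heredoc, comma-joined at the end,
# mirroring the config+=("$(cat <<-EOF ...)") pattern in nvmf/common.sh.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,          # join array entries with commas, as the real helper does
    printf '%s\n' "${config[*]}"
}
```

Joining the fragments this way is what produces the `},{` seams visible in the printed config that follows in the log.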
00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:20:59.657 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:59.657 "params": { 00:20:59.657 "name": "Nvme1", 00:20:59.657 "trtype": "tcp", 00:20:59.657 "traddr": "10.0.0.2", 00:20:59.657 "adrfam": "ipv4", 00:20:59.657 "trsvcid": "4420", 00:20:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.657 "hdgst": false, 00:20:59.657 "ddgst": false 00:20:59.657 }, 00:20:59.657 "method": "bdev_nvme_attach_controller" 00:20:59.658 },{ 00:20:59.658 "params": { 00:20:59.658 "name": "Nvme2", 00:20:59.658 "trtype": "tcp", 00:20:59.658 "traddr": "10.0.0.2", 00:20:59.658 "adrfam": "ipv4", 00:20:59.658 "trsvcid": "4420", 00:20:59.658 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:59.658 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:59.658 "hdgst": false, 00:20:59.658 "ddgst": false 00:20:59.658 }, 00:20:59.658 "method": "bdev_nvme_attach_controller" 00:20:59.658 },{ 00:20:59.658 "params": { 00:20:59.658 "name": "Nvme3", 00:20:59.658 "trtype": "tcp", 00:20:59.658 "traddr": "10.0.0.2", 00:20:59.658 "adrfam": "ipv4", 00:20:59.658 "trsvcid": "4420", 00:20:59.658 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:59.658 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:59.658 "hdgst": false, 00:20:59.658 "ddgst": false 00:20:59.658 }, 00:20:59.658 "method": "bdev_nvme_attach_controller" 00:20:59.658 },{ 00:20:59.658 "params": { 00:20:59.658 "name": "Nvme4", 00:20:59.658 "trtype": "tcp", 00:20:59.658 "traddr": "10.0.0.2", 00:20:59.658 "adrfam": "ipv4", 00:20:59.658 "trsvcid": "4420", 00:20:59.658 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:59.658 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:59.658 "hdgst": false, 00:20:59.658 "ddgst": false 00:20:59.658 }, 00:20:59.658 "method": "bdev_nvme_attach_controller" 00:20:59.658 },{ 00:20:59.658 "params": { 
00:20:59.658 "name": "Nvme5", 00:20:59.658 "trtype": "tcp", 00:20:59.658 "traddr": "10.0.0.2", 00:20:59.658 "adrfam": "ipv4", 00:20:59.658 "trsvcid": "4420", 00:20:59.658 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:59.658 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:59.658 "hdgst": false, 00:20:59.658 "ddgst": false 00:20:59.658 }, 00:20:59.658 "method": "bdev_nvme_attach_controller" 00:20:59.658 },{ 00:20:59.658 "params": { 00:20:59.658 "name": "Nvme6", 00:20:59.658 "trtype": "tcp", 00:20:59.658 "traddr": "10.0.0.2", 00:20:59.658 "adrfam": "ipv4", 00:20:59.658 "trsvcid": "4420", 00:20:59.658 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:59.658 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:59.658 "hdgst": false, 00:20:59.658 "ddgst": false 00:20:59.658 }, 00:20:59.658 "method": "bdev_nvme_attach_controller" 00:20:59.658 },{ 00:20:59.658 "params": { 00:20:59.658 "name": "Nvme7", 00:20:59.658 "trtype": "tcp", 00:20:59.658 "traddr": "10.0.0.2", 00:20:59.658 "adrfam": "ipv4", 00:20:59.658 "trsvcid": "4420", 00:20:59.658 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:59.658 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:59.658 "hdgst": false, 00:20:59.658 "ddgst": false 00:20:59.658 }, 00:20:59.658 "method": "bdev_nvme_attach_controller" 00:20:59.658 },{ 00:20:59.658 "params": { 00:20:59.658 "name": "Nvme8", 00:20:59.658 "trtype": "tcp", 00:20:59.658 "traddr": "10.0.0.2", 00:20:59.658 "adrfam": "ipv4", 00:20:59.658 "trsvcid": "4420", 00:20:59.658 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:59.658 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:59.658 "hdgst": false, 00:20:59.658 "ddgst": false 00:20:59.658 }, 00:20:59.658 "method": "bdev_nvme_attach_controller" 00:20:59.658 },{ 00:20:59.658 "params": { 00:20:59.658 "name": "Nvme9", 00:20:59.658 "trtype": "tcp", 00:20:59.658 "traddr": "10.0.0.2", 00:20:59.658 "adrfam": "ipv4", 00:20:59.658 "trsvcid": "4420", 00:20:59.658 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:59.658 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:59.658 "hdgst": false, 00:20:59.658 "ddgst": false 00:20:59.658 }, 00:20:59.658 "method": "bdev_nvme_attach_controller" 00:20:59.658 },{ 00:20:59.658 "params": { 00:20:59.658 "name": "Nvme10", 00:20:59.658 "trtype": "tcp", 00:20:59.658 "traddr": "10.0.0.2", 00:20:59.658 "adrfam": "ipv4", 00:20:59.658 "trsvcid": "4420", 00:20:59.658 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:59.658 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:59.658 "hdgst": false, 00:20:59.658 "ddgst": false 00:20:59.658 }, 00:20:59.658 "method": "bdev_nvme_attach_controller" 00:20:59.658 }' 00:20:59.658 [2024-10-15 13:01:19.856417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.658 [2024-10-15 13:01:19.897551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.563 Running I/O for 10 seconds... 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:01.563 13:01:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:01.563 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:01.822 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:01.822 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:01.822 13:01:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:01.822 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:01.822 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.822 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:01.822 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.822 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:01.822 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:01.822 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1275148 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1275148 ']' 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1275148 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1275148 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1275148' 00:21:02.081 killing process with pid 1275148 00:21:02.081 13:01:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1275148 00:21:02.081 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1275148 00:21:02.340 Received shutdown signal, test time was about 0.962591 seconds 00:21:02.340 00:21:02.340 Latency(us) 00:21:02.340 [2024-10-15T11:01:22.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.340 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.340 Verification LBA range: start 0x0 length 0x400 00:21:02.340 Nvme1n1 : 0.95 268.99 16.81 0.00 0.00 235320.08 17725.93 216705.71 00:21:02.340 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.340 Verification LBA range: start 0x0 length 0x400 00:21:02.340 Nvme2n1 : 0.95 268.25 16.77 0.00 0.00 232037.18 16602.45 215707.06 00:21:02.340 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.340 Verification LBA range: start 0x0 length 0x400 00:21:02.340 Nvme3n1 : 0.96 333.53 20.85 0.00 0.00 183418.10 13294.45 214708.42 00:21:02.340 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.340 Verification LBA range: start 0x0 length 0x400 00:21:02.340 Nvme4n1 : 0.95 274.36 17.15 0.00 0.00 218764.03 3058.35 214708.42 00:21:02.340 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.340 Verification LBA range: start 0x0 length 0x400 00:21:02.340 Nvme5n1 : 0.93 282.69 17.67 0.00 0.00 206899.21 2652.65 212711.13 00:21:02.340 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.340 Verification LBA range: start 0x0 length 0x400 00:21:02.340 Nvme6n1 : 0.93 279.07 17.44 0.00 0.00 207024.86 2262.55 209715.20 00:21:02.340 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.340 Verification LBA range: start 0x0 length 0x400 00:21:02.340 Nvme7n1 : 
0.94 272.91 17.06 0.00 0.00 208699.25 17101.78 217704.35 00:21:02.340 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.340 Verification LBA range: start 0x0 length 0x400 00:21:02.340 Nvme8n1 : 0.94 272.34 17.02 0.00 0.00 205355.52 13544.11 213709.78 00:21:02.340 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.340 Verification LBA range: start 0x0 length 0x400 00:21:02.340 Nvme9n1 : 0.96 266.63 16.66 0.00 0.00 206106.58 29459.99 215707.06 00:21:02.340 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.340 Verification LBA range: start 0x0 length 0x400 00:21:02.340 Nvme10n1 : 0.96 266.13 16.63 0.00 0.00 203149.17 18849.40 230686.72 00:21:02.340 [2024-10-15T11:01:22.659Z] =================================================================================================================== 00:21:02.340 [2024-10-15T11:01:22.659Z] Total : 2784.91 174.06 0.00 0.00 210011.83 2262.55 230686.72 00:21:02.340 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:03.277 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1274846 00:21:03.277 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:03.277 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:03.277 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:03.277 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:03.277 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:21:03.277 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:03.277 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:03.277 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.277 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.536 rmmod nvme_tcp 00:21:03.536 rmmod nvme_fabrics 00:21:03.536 rmmod nvme_keyring 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1274846 ']' 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1274846 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1274846 ']' 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1274846 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1274846 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1274846' 00:21:03.536 killing process with pid 1274846 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1274846 00:21:03.536 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1274846 00:21:03.795 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:03.795 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:03.795 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:03.795 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:03.795 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:21:03.795 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:03.795 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:21:03.795 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.795 13:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.795 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.795 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.795 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:06.331 00:21:06.331 real 0m8.215s 00:21:06.331 user 0m25.304s 00:21:06.331 sys 0m1.435s 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.331 ************************************ 00:21:06.331 END TEST nvmf_shutdown_tc2 00:21:06.331 ************************************ 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:06.331 ************************************ 00:21:06.331 START TEST nvmf_shutdown_tc3 00:21:06.331 ************************************ 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:21:06.331 13:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.331 13:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.331 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:06.332 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:06.332 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:06.332 Found net devices under 0000:86:00.0: cvl_0_0 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.332 13:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:06.332 Found net devices under 0000:86:00.1: cvl_0_1 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:06.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:21:06.332 00:21:06.332 --- 10.0.0.2 ping statistics --- 00:21:06.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.332 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:06.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:21:06.332 00:21:06.332 --- 10.0.0.1 ping statistics --- 00:21:06.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.332 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:06.332 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:06.333 
13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1276266 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1276266 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1276266 ']' 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:06.333 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:06.333 [2024-10-15 13:01:26.616772] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:21:06.333 [2024-10-15 13:01:26.616815] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.592 [2024-10-15 13:01:26.690411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.592 [2024-10-15 13:01:26.731924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.592 [2024-10-15 13:01:26.731962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.592 [2024-10-15 13:01:26.731969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.592 [2024-10-15 13:01:26.731975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.592 [2024-10-15 13:01:26.731980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:06.592 [2024-10-15 13:01:26.733631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.592 [2024-10-15 13:01:26.733742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.592 [2024-10-15 13:01:26.733847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.592 [2024-10-15 13:01:26.733848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:07.160 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:07.160 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:07.160 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:07.160 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:07.160 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.419 [2024-10-15 13:01:27.491535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.419 13:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.419 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:07.420 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.420 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:07.420 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.420 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:07.420 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:07.420 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.420 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.420 Malloc1 00:21:07.420 [2024-10-15 13:01:27.604472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.420 Malloc2 00:21:07.420 Malloc3 00:21:07.420 Malloc4 00:21:07.678 Malloc5 00:21:07.678 Malloc6 00:21:07.678 Malloc7 00:21:07.678 Malloc8 00:21:07.678 Malloc9 
00:21:07.678 Malloc10 00:21:07.678 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.678 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:07.678 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:07.678 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1276560 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1276560 /var/tmp/bdevperf.sock 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1276560 ']' 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:07.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.938 { 00:21:07.938 "params": { 00:21:07.938 "name": "Nvme$subsystem", 00:21:07.938 "trtype": "$TEST_TRANSPORT", 00:21:07.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.938 "adrfam": "ipv4", 00:21:07.938 "trsvcid": "$NVMF_PORT", 00:21:07.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.938 "hdgst": ${hdgst:-false}, 00:21:07.938 "ddgst": ${ddgst:-false} 00:21:07.938 }, 00:21:07.938 "method": "bdev_nvme_attach_controller" 00:21:07.938 } 00:21:07.938 EOF 00:21:07.938 )") 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.938 { 00:21:07.938 "params": { 00:21:07.938 "name": "Nvme$subsystem", 00:21:07.938 "trtype": "$TEST_TRANSPORT", 00:21:07.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.938 
"adrfam": "ipv4", 00:21:07.938 "trsvcid": "$NVMF_PORT", 00:21:07.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.938 "hdgst": ${hdgst:-false}, 00:21:07.938 "ddgst": ${ddgst:-false} 00:21:07.938 }, 00:21:07.938 "method": "bdev_nvme_attach_controller" 00:21:07.938 } 00:21:07.938 EOF 00:21:07.938 )") 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.938 { 00:21:07.938 "params": { 00:21:07.938 "name": "Nvme$subsystem", 00:21:07.938 "trtype": "$TEST_TRANSPORT", 00:21:07.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.938 "adrfam": "ipv4", 00:21:07.938 "trsvcid": "$NVMF_PORT", 00:21:07.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.938 "hdgst": ${hdgst:-false}, 00:21:07.938 "ddgst": ${ddgst:-false} 00:21:07.938 }, 00:21:07.938 "method": "bdev_nvme_attach_controller" 00:21:07.938 } 00:21:07.938 EOF 00:21:07.938 )") 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.938 { 00:21:07.938 "params": { 00:21:07.938 "name": "Nvme$subsystem", 00:21:07.938 "trtype": "$TEST_TRANSPORT", 00:21:07.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.938 "adrfam": "ipv4", 00:21:07.938 "trsvcid": "$NVMF_PORT", 00:21:07.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:07.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.938 "hdgst": ${hdgst:-false}, 00:21:07.938 "ddgst": ${ddgst:-false} 00:21:07.938 }, 00:21:07.938 "method": "bdev_nvme_attach_controller" 00:21:07.938 } 00:21:07.938 EOF 00:21:07.938 )") 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.938 { 00:21:07.938 "params": { 00:21:07.938 "name": "Nvme$subsystem", 00:21:07.938 "trtype": "$TEST_TRANSPORT", 00:21:07.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.938 "adrfam": "ipv4", 00:21:07.938 "trsvcid": "$NVMF_PORT", 00:21:07.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.938 "hdgst": ${hdgst:-false}, 00:21:07.938 "ddgst": ${ddgst:-false} 00:21:07.938 }, 00:21:07.938 "method": "bdev_nvme_attach_controller" 00:21:07.938 } 00:21:07.938 EOF 00:21:07.938 )") 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.938 { 00:21:07.938 "params": { 00:21:07.938 "name": "Nvme$subsystem", 00:21:07.938 "trtype": "$TEST_TRANSPORT", 00:21:07.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.938 "adrfam": "ipv4", 00:21:07.938 "trsvcid": "$NVMF_PORT", 00:21:07.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.938 "hdgst": ${hdgst:-false}, 00:21:07.938 "ddgst": 
${ddgst:-false} 00:21:07.938 }, 00:21:07.938 "method": "bdev_nvme_attach_controller" 00:21:07.938 } 00:21:07.938 EOF 00:21:07.938 )") 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.938 { 00:21:07.938 "params": { 00:21:07.938 "name": "Nvme$subsystem", 00:21:07.938 "trtype": "$TEST_TRANSPORT", 00:21:07.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.938 "adrfam": "ipv4", 00:21:07.938 "trsvcid": "$NVMF_PORT", 00:21:07.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.938 "hdgst": ${hdgst:-false}, 00:21:07.938 "ddgst": ${ddgst:-false} 00:21:07.938 }, 00:21:07.938 "method": "bdev_nvme_attach_controller" 00:21:07.938 } 00:21:07.938 EOF 00:21:07.938 )") 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:07.938 [2024-10-15 13:01:28.080958] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:21:07.938 [2024-10-15 13:01:28.081012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276560 ] 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.938 { 00:21:07.938 "params": { 00:21:07.938 "name": "Nvme$subsystem", 00:21:07.938 "trtype": "$TEST_TRANSPORT", 00:21:07.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.938 "adrfam": "ipv4", 00:21:07.938 "trsvcid": "$NVMF_PORT", 00:21:07.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.938 "hdgst": ${hdgst:-false}, 00:21:07.938 "ddgst": ${ddgst:-false} 00:21:07.938 }, 00:21:07.938 "method": "bdev_nvme_attach_controller" 00:21:07.938 } 00:21:07.938 EOF 00:21:07.938 )") 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.938 { 00:21:07.938 "params": { 00:21:07.938 "name": "Nvme$subsystem", 00:21:07.938 "trtype": "$TEST_TRANSPORT", 00:21:07.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.938 "adrfam": "ipv4", 00:21:07.938 "trsvcid": "$NVMF_PORT", 00:21:07.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.938 "hdgst": ${hdgst:-false}, 00:21:07.938 "ddgst": ${ddgst:-false} 00:21:07.938 }, 00:21:07.938 "method": 
"bdev_nvme_attach_controller" 00:21:07.938 } 00:21:07.938 EOF 00:21:07.938 )") 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:07.938 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.939 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.939 { 00:21:07.939 "params": { 00:21:07.939 "name": "Nvme$subsystem", 00:21:07.939 "trtype": "$TEST_TRANSPORT", 00:21:07.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.939 "adrfam": "ipv4", 00:21:07.939 "trsvcid": "$NVMF_PORT", 00:21:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.939 "hdgst": ${hdgst:-false}, 00:21:07.939 "ddgst": ${ddgst:-false} 00:21:07.939 }, 00:21:07.939 "method": "bdev_nvme_attach_controller" 00:21:07.939 } 00:21:07.939 EOF 00:21:07.939 )") 00:21:07.939 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:07.939 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 
00:21:07.939 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:21:07.939 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:07.939 "params": { 00:21:07.939 "name": "Nvme1", 00:21:07.939 "trtype": "tcp", 00:21:07.939 "traddr": "10.0.0.2", 00:21:07.939 "adrfam": "ipv4", 00:21:07.939 "trsvcid": "4420", 00:21:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.939 "hdgst": false, 00:21:07.939 "ddgst": false 00:21:07.939 }, 00:21:07.939 "method": "bdev_nvme_attach_controller" 00:21:07.939 },{ 00:21:07.939 "params": { 00:21:07.939 "name": "Nvme2", 00:21:07.939 "trtype": "tcp", 00:21:07.939 "traddr": "10.0.0.2", 00:21:07.939 "adrfam": "ipv4", 00:21:07.939 "trsvcid": "4420", 00:21:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:07.939 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:07.939 "hdgst": false, 00:21:07.939 "ddgst": false 00:21:07.939 }, 00:21:07.939 "method": "bdev_nvme_attach_controller" 00:21:07.939 },{ 00:21:07.939 "params": { 00:21:07.939 "name": "Nvme3", 00:21:07.939 "trtype": "tcp", 00:21:07.939 "traddr": "10.0.0.2", 00:21:07.939 "adrfam": "ipv4", 00:21:07.939 "trsvcid": "4420", 00:21:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:07.939 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:07.939 "hdgst": false, 00:21:07.939 "ddgst": false 00:21:07.939 }, 00:21:07.939 "method": "bdev_nvme_attach_controller" 00:21:07.939 },{ 00:21:07.939 "params": { 00:21:07.939 "name": "Nvme4", 00:21:07.939 "trtype": "tcp", 00:21:07.939 "traddr": "10.0.0.2", 00:21:07.939 "adrfam": "ipv4", 00:21:07.939 "trsvcid": "4420", 00:21:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:07.939 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:07.939 "hdgst": false, 00:21:07.939 "ddgst": false 00:21:07.939 }, 00:21:07.939 "method": "bdev_nvme_attach_controller" 00:21:07.939 },{ 00:21:07.939 "params": { 
00:21:07.939 "name": "Nvme5", 00:21:07.939 "trtype": "tcp", 00:21:07.939 "traddr": "10.0.0.2", 00:21:07.939 "adrfam": "ipv4", 00:21:07.939 "trsvcid": "4420", 00:21:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:07.939 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:07.939 "hdgst": false, 00:21:07.939 "ddgst": false 00:21:07.939 }, 00:21:07.939 "method": "bdev_nvme_attach_controller" 00:21:07.939 },{ 00:21:07.939 "params": { 00:21:07.939 "name": "Nvme6", 00:21:07.939 "trtype": "tcp", 00:21:07.939 "traddr": "10.0.0.2", 00:21:07.939 "adrfam": "ipv4", 00:21:07.939 "trsvcid": "4420", 00:21:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:07.939 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:07.939 "hdgst": false, 00:21:07.939 "ddgst": false 00:21:07.939 }, 00:21:07.939 "method": "bdev_nvme_attach_controller" 00:21:07.939 },{ 00:21:07.939 "params": { 00:21:07.939 "name": "Nvme7", 00:21:07.939 "trtype": "tcp", 00:21:07.939 "traddr": "10.0.0.2", 00:21:07.939 "adrfam": "ipv4", 00:21:07.939 "trsvcid": "4420", 00:21:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:07.939 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:07.939 "hdgst": false, 00:21:07.939 "ddgst": false 00:21:07.939 }, 00:21:07.939 "method": "bdev_nvme_attach_controller" 00:21:07.939 },{ 00:21:07.939 "params": { 00:21:07.939 "name": "Nvme8", 00:21:07.939 "trtype": "tcp", 00:21:07.939 "traddr": "10.0.0.2", 00:21:07.939 "adrfam": "ipv4", 00:21:07.939 "trsvcid": "4420", 00:21:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:07.939 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:07.939 "hdgst": false, 00:21:07.939 "ddgst": false 00:21:07.939 }, 00:21:07.939 "method": "bdev_nvme_attach_controller" 00:21:07.939 },{ 00:21:07.939 "params": { 00:21:07.939 "name": "Nvme9", 00:21:07.939 "trtype": "tcp", 00:21:07.939 "traddr": "10.0.0.2", 00:21:07.939 "adrfam": "ipv4", 00:21:07.939 "trsvcid": "4420", 00:21:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:07.939 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:07.939 "hdgst": false, 00:21:07.939 "ddgst": false 00:21:07.939 }, 00:21:07.939 "method": "bdev_nvme_attach_controller" 00:21:07.939 },{ 00:21:07.939 "params": { 00:21:07.939 "name": "Nvme10", 00:21:07.939 "trtype": "tcp", 00:21:07.939 "traddr": "10.0.0.2", 00:21:07.939 "adrfam": "ipv4", 00:21:07.939 "trsvcid": "4420", 00:21:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:07.939 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:07.939 "hdgst": false, 00:21:07.939 "ddgst": false 00:21:07.939 }, 00:21:07.939 "method": "bdev_nvme_attach_controller" 00:21:07.939 }' 00:21:07.939 [2024-10-15 13:01:28.152120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.939 [2024-10-15 13:01:28.193741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.844 Running I/O for 10 seconds... 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:09.844 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.844 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.844 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:09.844 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:09.844 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1276266 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1276266 ']' 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1276266 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:21:10.110 13:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1276266 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1276266' 00:21:10.110 killing process with pid 1276266 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1276266 00:21:10.110 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1276266 00:21:10.110 [2024-10-15 13:01:30.394306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be 
set 00:21:10.110 [2024-10-15 13:01:30.394394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 
13:01:30.394472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394546] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.110 [2024-10-15 13:01:30.394668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 
is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.394760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe030 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.395067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.111 [2024-10-15 13:01:30.395099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.395109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.111 [2024-10-15 13:01:30.395117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.395125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.111 [2024-10-15 13:01:30.395131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.395139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.111 [2024-10-15 13:01:30.395149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.395157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc10270 is same with the state(6) to be set 00:21:10.111 [2024-10-15 13:01:30.396584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396663] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396744] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 
[2024-10-15 13:01:30.396918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.396985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.396994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.397000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.397008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.397015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.397023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.397029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.397038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.397044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.397052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.397058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.111 [2024-10-15 13:01:30.397066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.111 [2024-10-15 13:01:30.397073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112 [2024-10-15 13:01:30.397081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112 [2024-10-15 13:01:30.397087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112 [2024-10-15 13:01:30.397095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112 [2024-10-15 13:01:30.397102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112 [2024-10-15 13:01:30.397111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112 [2024-10-15 13:01:30.397117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112 [2024-10-15 13:01:30.397125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112 [2024-10-15 13:01:30.397131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112 [2024-10-15 13:01:30.397139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112 [2024-10-15 13:01:30.397146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112 [2024-10-15 13:01:30.397154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112 [2024-10-15 13:01:30.397162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:10.112 [2024-10-15 13:01:30.397170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.112
[2024-10-15 13:01:30.397499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.112
[2024-10-15 13:01:30.397507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.112
[2024-10-15 13:01:30.397508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.113
[2024-10-15 13:01:30.397514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113
[2024-10-15 13:01:30.397517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.113
[2024-10-15 13:01:30.397521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113
[2024-10-15 13:01:30.397526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.113
[2024-10-15 13:01:30.397529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113
[2024-10-15 13:01:30.397535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.113
[2024-10-15 13:01:30.397536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the
state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-15 13:01:30.397545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.113 he state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.113 [2024-10-15 13:01:30.397560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.113 [2024-10-15 13:01:30.397568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.113 [2024-10-15 13:01:30.397574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-15 13:01:30.397582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.113 he state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.113 [2024-10-15 13:01:30.397597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with t[2024-10-15 13:01:30.397599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:21:10.113 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.113 [2024-10-15 13:01:30.397611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.113 [2024-10-15 13:01:30.397618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.113 [2024-10-15 13:01:30.397625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:1[2024-10-15 13:01:30.397632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.113 he state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-15 13:01:30.397640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbfe520 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.113 he state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with t[2024-10-15 13:01:30.397666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such devihe state(6) to be set 00:21:10.113 ce or address) on qpair id 1 00:21:10.113 [2024-10-15 13:01:30.397675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xbfe520 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.397720] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe15b50 was disconnected and freed. reset controller. 00:21:10.113 [2024-10-15 13:01:30.399223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:10.113 [2024-10-15 13:01:30.399273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1053e80 (9): Bad file descriptor 00:21:10.113 [2024-10-15 13:01:30.400758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.113 [2024-10-15 13:01:30.400782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1053e80 with addr=10.0.0.2, port=4420 00:21:10.113 [2024-10-15 13:01:30.400792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053e80 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1053e80 (9): Bad file descriptor 00:21:10.113 [2024-10-15 13:01:30.401220] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:10.113 [2024-10-15 13:01:30.401216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 
is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 
00:21:10.113 [2024-10-15 13:01:30.401419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.113 [2024-10-15 13:01:30.401494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401505] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.401576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.401582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.401589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.401594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.401604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.401610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.401617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.401623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.401619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:10.114
[2024-10-15 13:01:30.401630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.401636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:10.114
[2024-10-15 13:01:30.401636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe9f0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.401647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:10.114 [2024-10-15 13:01:30.401708] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:10.114 [2024-10-15 13:01:30.402024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.114 [2024-10-15 13:01:30.403673] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:10.114 [2024-10-15 13:01:30.404869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 
00:21:10.114 [2024-10-15 13:01:30.404950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.404999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405029] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114 [2024-10-15 13:01:30.405103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc10270 (9): Bad file descriptor 00:21:10.114
[2024-10-15 13:01:30.405168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is
same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.114
[2024-10-15 13:01:30.405190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.115
[2024-10-15 13:01:30.405212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.115
[2024-10-15 13:01:30.405219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.115
[2024-10-15 13:01:30.405226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.115
[2024-10-15 13:01:30.405233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.115
[2024-10-15 13:01:30.405249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.115
[2024-10-15 13:01:30.405256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.115
[2024-10-15 13:01:30.405263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.115
[2024-10-15 13:01:30.405270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc056d0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set 00:21:10.115
[2024-10-15 13:01:30.405284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xbfeee0 is same with the state(6) to be set
00:21:10.115 [2024-10-15 13:01:30.405290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set
00:21:10.115 [2024-10-15 13:01:30.405299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfeee0 is same with the state(6) to be set
00:21:10.115 [2024-10-15 13:01:30.405299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:10.115 [2024-10-15 13:01:30.405311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:10.115 [2024-10-15 13:01:30.405318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:10.115 [2024-10-15 13:01:30.405325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:10.115 [2024-10-15 13:01:30.405332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:10.115 [2024-10-15 13:01:30.405338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:10.115 [2024-10-15 13:01:30.405345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:10.115 [2024-10-15 13:01:30.405352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:10.115 [2024-10-15 13:01:30.405358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0e580 is same with the state(6) to be set
00:21:10.115 [2024-10-15 13:01:30.405380] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.115 [2024-10-15 13:01:30.405388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.115 [2024-10-15 13:01:30.405396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.115 [2024-10-15 13:01:30.405403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.115 [2024-10-15 13:01:30.405410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.115 [2024-10-15 13:01:30.405416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.115 [2024-10-15 13:01:30.405423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.115 [2024-10-15 13:01:30.405430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.115 [2024-10-15 13:01:30.405436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0fe10 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.405907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.405927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.405933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 
00:21:10.115 [2024-10-15 13:01:30.405940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.405946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.405952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.405958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.405964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.405974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.405980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.405986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.405993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.405999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406017] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 
is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.115 [2024-10-15 13:01:30.406211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 
00:21:10.116 [2024-10-15 13:01:30.406240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.406299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff3b0 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407197] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 
is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 
00:21:10.116 [2024-10-15 13:01:30.407422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407495] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xbff880 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.407651] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:10.116 [2024-10-15 13:01:30.408177] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:10.116 [2024-10-15 13:01:30.408707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.408722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.408730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.408736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.408742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.408748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.408753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.408759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.408765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.408771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.116 [2024-10-15 13:01:30.408776] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408798] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:10.117 [2024-10-15 13:01:30.408805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is 
same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 
00:21:10.117 [2024-10-15 13:01:30.408921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408991] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.408997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.409003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.409008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.409014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.409019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.409025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.410114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:10.117 [2024-10-15 13:01:30.410381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.117 [2024-10-15 13:01:30.410398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1053e80 with addr=10.0.0.2, port=4420 00:21:10.117 [2024-10-15 13:01:30.410406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053e80 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.410487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1053e80 (9): Bad file descriptor 00:21:10.117 [2024-10-15 13:01:30.410569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error 
state 00:21:10.117 [2024-10-15 13:01:30.410578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:10.117 [2024-10-15 13:01:30.410585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:10.117 [2024-10-15 13:01:30.410678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.117 [2024-10-15 13:01:30.415187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.117 [2024-10-15 13:01:30.415206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.117 [2024-10-15 13:01:30.415222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.117 [2024-10-15 13:01:30.415236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.117 [2024-10-15 13:01:30.415250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106ee50 is same with the state(6) to be set 
00:21:10.117 [2024-10-15 13:01:30.415285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.117 [2024-10-15 13:01:30.415293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.117 [2024-10-15 13:01:30.415307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.117 [2024-10-15 13:01:30.415320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.117 [2024-10-15 13:01:30.415334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25610 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.415368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc056d0 (9): Bad file descriptor 00:21:10.117 [2024-10-15 13:01:30.415384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e580 (9): Bad file descriptor 00:21:10.117 [2024-10-15 13:01:30.415399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xc0fe10 (9): Bad file descriptor 00:21:10.117 [2024-10-15 13:01:30.415422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.117 [2024-10-15 13:01:30.415430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.117 [2024-10-15 13:01:30.415444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.117 [2024-10-15 13:01:30.415459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.117 [2024-10-15 13:01:30.415472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103af40 is same with the state(6) to be set 00:21:10.117 [2024-10-15 13:01:30.415568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.117 [2024-10-15 13:01:30.415577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415589] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.117 [2024-10-15 13:01:30.415596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.117 [2024-10-15 13:01:30.415622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.117 [2024-10-15 13:01:30.415630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415850] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415930] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.415989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.415997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.416004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.416012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.416018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.416026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.416033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.416041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.416047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.416055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.416062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.416070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.416077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.416084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.416091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 
13:01:30.416099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.416105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.416113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.416119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.416127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.416134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.416141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.416148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.416156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.118 [2024-10-15 13:01:30.416162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.118 [2024-10-15 13:01:30.416171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:10.119 [2024-10-15 13:01:30.416348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416429] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.416516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.416523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1139eb0 is same with the state(6) to be set 00:21:10.119 [2024-10-15 13:01:30.417198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.119 [2024-10-15 13:01:30.417209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.119 [2024-10-15 13:01:30.417216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.119 [2024-10-15 13:01:30.417223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.119 [2024-10-15 13:01:30.417228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.119 [2024-10-15 13:01:30.417234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.119 [2024-10-15 13:01:30.417239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.119 [2024-10-15 13:01:30.417245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.119 [2024-10-15 13:01:30.417251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.119 [2024-10-15 13:01:30.417256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbffd50 is same with the state(6) to be set 00:21:10.119 [2024-10-15 13:01:30.417509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:10.119 [2024-10-15 13:01:30.417598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.119 [2024-10-15 13:01:30.417720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.119 [2024-10-15 13:01:30.417727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417941] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.417991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.417999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418021] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 
13:01:30.418195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.418334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.418342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.120 [2024-10-15 13:01:30.424795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.120 [2024-10-15 13:01:30.424808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.121 [2024-10-15 13:01:30.424816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.121 [2024-10-15 13:01:30.424826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.121 [2024-10-15 13:01:30.424834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.121 [2024-10-15 13:01:30.424844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.121 [2024-10-15 13:01:30.424854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.121 [2024-10-15 13:01:30.424863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.121 [2024-10-15 13:01:30.424870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.121 [2024-10-15 13:01:30.424879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.121 [2024-10-15 13:01:30.424886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.121 [2024-10-15 13:01:30.424895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.121 [2024-10-15 13:01:30.424902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:10.121 [2024-10-15 13:01:30.424910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.121 [2024-10-15 13:01:30.424917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.121 [2024-10-15 13:01:30.424926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.121 [2024-10-15 13:01:30.424932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.121 [2024-10-15 13:01:30.424940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d10300 is same with the state(6) to be set 00:21:10.121 [2024-10-15 13:01:30.424994] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d10300 was disconnected and freed. reset controller. 
00:21:10.121 [2024-10-15 13:01:30.425009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:10.387 [2024-10-15 13:01:30.426031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:10.387 [2024-10-15 13:01:30.426051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb25610 (9): Bad file descriptor 00:21:10.387 [2024-10-15 13:01:30.426293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.387 [2024-10-15 13:01:30.426307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc10270 with addr=10.0.0.2, port=4420 00:21:10.387 [2024-10-15 13:01:30.426315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc10270 is same with the state(6) to be set 00:21:10.387 [2024-10-15 13:01:30.426340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.387 [2024-10-15 13:01:30.426350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.387 [2024-10-15 13:01:30.426367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.387 [2024-10-15 13:01:30.426382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426390] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.387 [2024-10-15 13:01:30.426397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106dbe0 is same with the state(6) to be set 00:21:10.387 [2024-10-15 13:01:30.426424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106ee50 (9): Bad file descriptor 00:21:10.387 [2024-10-15 13:01:30.426450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.387 [2024-10-15 13:01:30.426459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.387 [2024-10-15 13:01:30.426473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.387 [2024-10-15 13:01:30.426488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.387 [2024-10-15 13:01:30.426502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051c90 is same with the state(6) to be set 00:21:10.387 [2024-10-15 13:01:30.426540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103af40 (9): Bad file descriptor 00:21:10.387 [2024-10-15 13:01:30.426824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:10.387 [2024-10-15 13:01:30.426859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc10270 (9): Bad file descriptor 00:21:10.387 [2024-10-15 13:01:30.426906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.387 [2024-10-15 13:01:30.426915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.387 [2024-10-15 13:01:30.426933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.387 [2024-10-15 13:01:30.426949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.387 [2024-10-15 13:01:30.426965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.387 [2024-10-15 13:01:30.426980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.426989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.387 [2024-10-15 13:01:30.426999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.427008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.387 [2024-10-15 13:01:30.427015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.427023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.387 [2024-10-15 13:01:30.427031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.427039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.387 [2024-10-15 13:01:30.427046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.427055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:10.387 [2024-10-15 13:01:30.427061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.427070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.387 [2024-10-15 13:01:30.427077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.427085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.387 [2024-10-15 13:01:30.427093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.387 [2024-10-15 13:01:30.427101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.387 [2024-10-15 13:01:30.427108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427416] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427500] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 
13:01:30.427688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.388 [2024-10-15 13:01:30.427726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.388 [2024-10-15 13:01:30.427734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.427741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.427750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.427756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.427765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.427772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.427780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.427787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.427801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.427808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.427817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.427824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.427832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.427840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.427848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.427855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.427864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.427871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.427880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.427886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.427895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.427902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.427910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.427917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.427925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113b100 is same with the state(6) to be set 00:21:10.389 [2024-10-15 13:01:30.428948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.428962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.428973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:10.389 [2024-10-15 13:01:30.428981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.428989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.428997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.389 [2024-10-15 13:01:30.429220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.389 [2024-10-15 13:01:30.429229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429339] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429425] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 
13:01:30.429611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.390 [2024-10-15 13:01:30.429698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.390 [2024-10-15 13:01:30.429707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:10.391 [2024-10-15 13:01:30.429881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.429965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.429973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113c560 is same with the state(6) to be set 00:21:10.391 [2024-10-15 13:01:30.431002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.431016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.431027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.431035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.431046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.431053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.431062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.431068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.431077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.431084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.431093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.431100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.431109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.431116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.431125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.431133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.431142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.431148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.431157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.431164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.431172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:10.391 [2024-10-15 13:01:30.431179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.391 [2024-10-15 13:01:30.431188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.391 [2024-10-15 13:01:30.431195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.392 [2024-10-15 13:01:30.431531] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.392 [2024-10-15 13:01:30.431538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431621] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 
13:01:30.431804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.431991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.431998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.393 [2024-10-15 13:01:30.432007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.393 [2024-10-15 13:01:30.432014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.394 [2024-10-15 13:01:30.432021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113d9c0 is same with the state(6) to be set 00:21:10.394 [2024-10-15 13:01:30.433257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:10.394 [2024-10-15 13:01:30.433277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:10.394 [2024-10-15 13:01:30.433286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:10.394 [2024-10-15 13:01:30.433567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.394 [2024-10-15 13:01:30.433583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb25610 with addr=10.0.0.2, port=4420 00:21:10.394 [2024-10-15 13:01:30.433591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25610 is same with the state(6) to be set 00:21:10.394 
[2024-10-15 13:01:30.433733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.394 [2024-10-15 13:01:30.433745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1053e80 with addr=10.0.0.2, port=4420 00:21:10.394 [2024-10-15 13:01:30.433752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053e80 is same with the state(6) to be set 00:21:10.394 [2024-10-15 13:01:30.433760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:10.394 [2024-10-15 13:01:30.433778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:10.394 [2024-10-15 13:01:30.433787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:10.394 [2024-10-15 13:01:30.433858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:10.394 [2024-10-15 13:01:30.434088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.394 [2024-10-15 13:01:30.434099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0fe10 with addr=10.0.0.2, port=4420 00:21:10.394 [2024-10-15 13:01:30.434106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0fe10 is same with the state(6) to be set 00:21:10.394 [2024-10-15 13:01:30.434195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.394 [2024-10-15 13:01:30.434204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc056d0 with addr=10.0.0.2, port=4420 00:21:10.394 [2024-10-15 13:01:30.434211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc056d0 is same with the state(6) to be set 00:21:10.394 [2024-10-15 13:01:30.434311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.394 [2024-10-15 13:01:30.434321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0e580 with addr=10.0.0.2, port=4420 00:21:10.394 [2024-10-15 13:01:30.434328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0e580 is same with the state(6) to be set 00:21:10.394 [2024-10-15 13:01:30.434337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb25610 (9): Bad file descriptor 00:21:10.394 [2024-10-15 13:01:30.434345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1053e80 (9): Bad file descriptor 00:21:10.394 [2024-10-15 13:01:30.435019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0fe10 (9): Bad file descriptor 00:21:10.394 [2024-10-15 13:01:30.435034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc056d0 (9): Bad file descriptor 00:21:10.394 
[2024-10-15 13:01:30.435042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e580 (9): Bad file descriptor 00:21:10.394 [2024-10-15 13:01:30.435050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:10.394 [2024-10-15 13:01:30.435056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:10.394 [2024-10-15 13:01:30.435063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:10.394 [2024-10-15 13:01:30.435078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:10.394 [2024-10-15 13:01:30.435084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:10.394 [2024-10-15 13:01:30.435090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:10.394 [2024-10-15 13:01:30.435127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.394 [2024-10-15 13:01:30.435134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.394 [2024-10-15 13:01:30.435140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:10.394 [2024-10-15 13:01:30.435146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:10.394 [2024-10-15 13:01:30.435152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:10.394 [2024-10-15 13:01:30.435161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:10.394 [2024-10-15 13:01:30.435167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:10.394 [2024-10-15 13:01:30.435174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:10.394 [2024-10-15 13:01:30.435182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:10.394 [2024-10-15 13:01:30.435189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:10.394 [2024-10-15 13:01:30.435194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:10.394 [2024-10-15 13:01:30.435234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.394 [2024-10-15 13:01:30.435242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.394 [2024-10-15 13:01:30.435247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:10.394 [2024-10-15 13:01:30.435268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:10.394 [2024-10-15 13:01:30.435385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:10.394 [2024-10-15 13:01:30.435398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc10270 with addr=10.0.0.2, port=4420
00:21:10.394 [2024-10-15 13:01:30.435405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc10270 is same with the state(6) to be set
00:21:10.394 [2024-10-15 13:01:30.435425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc10270 (9): Bad file descriptor
00:21:10.394 [2024-10-15 13:01:30.435445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:10.394 [2024-10-15 13:01:30.435451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:10.394 [2024-10-15 13:01:30.435458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:10.394 [2024-10-15 13:01:30.435479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:10.394 [2024-10-15 13:01:30.436052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106dbe0 (9): Bad file descriptor 00:21:10.394 [2024-10-15 13:01:30.436075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1051c90 (9): Bad file descriptor 00:21:10.394 [2024-10-15 13:01:30.436149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.394 [2024-10-15 13:01:30.436160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.394 [2024-10-15 13:01:30.436175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.394 [2024-10-15 13:01:30.436182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.394 [2024-10-15 13:01:30.436190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.394 [2024-10-15 13:01:30.436197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.394 [2024-10-15 13:01:30.436205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.394 [2024-10-15 13:01:30.436212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.394 [2024-10-15 13:01:30.436220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436227] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 
13:01:30.436400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 
nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:10.395 [2024-10-15 13:01:30.436669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.395 [2024-10-15 13:01:30.436735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.395 [2024-10-15 13:01:30.436744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436750] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.436989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.436997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 
13:01:30.437004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.437012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.437019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.437027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.437034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.437042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.437048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.437056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.437063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.437071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.437078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.437086] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.437092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.437100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.437107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.437115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.437121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.437130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113e9e0 is same with the state(6) to be set 00:21:10.396 [2024-10-15 13:01:30.438101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438226] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.396 [2024-10-15 13:01:30.438301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.396 [2024-10-15 13:01:30.438307] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:10.396 [2024-10-15 13:01:30.438315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:10.396 [2024-10-15 13:01:30.438321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" command-and-completion pairs repeated for cid:15 through cid:62, lba stepping by 128 from 18304 to 24320 ...]
00:21:10.398 [2024-10-15 13:01:30.439039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:10.398 [2024-10-15 13:01:30.439046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:10.398 [2024-10-15 13:01:30.439053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113ff10 is same with the state(6) to be set
00:21:10.398 [2024-10-15 13:01:30.440002]
nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:10.398 task offset: 16384 on job bdev=Nvme10n1 fails
00:21:10.398
00:21:10.398 Latency(us)
00:21:10.398 [2024-10-15T11:01:30.717Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:21:10.398 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:10.398 Job: Nvme1n1 ended in about 0.69 seconds with error
00:21:10.398 Verification LBA range: start 0x0 length 0x400
00:21:10.398 Nvme1n1 : 0.69  192.80  12.05  92.77  0.00  220627.89  15541.39  218702.99
00:21:10.398 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:10.398 Job: Nvme2n1 ended in about 0.70 seconds with error
00:21:10.398 Verification LBA range: start 0x0 length 0x400
00:21:10.398 Nvme2n1 : 0.70  182.54  11.41  91.27  0.00  224863.25  23592.96  214708.42
00:21:10.398 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:10.398 Job: Nvme3n1 ended in about 0.70 seconds with error
00:21:10.398 Verification LBA range: start 0x0 length 0x400
00:21:10.398 Nvme3n1 : 0.70  182.01  11.38  91.00  0.00  220323.84  14667.58  214708.42
00:21:10.398 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:10.398 Job: Nvme4n1 ended in about 0.71 seconds with error
00:21:10.398 Verification LBA range: start 0x0 length 0x400
00:21:10.398 Nvme4n1 : 0.71  181.48  11.34  90.74  0.00  215848.15  14293.09  217704.35
00:21:10.398 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:10.398 Job: Nvme5n1 ended in about 0.71 seconds with error
00:21:10.398 Verification LBA range: start 0x0 length 0x400
00:21:10.398 Nvme5n1 : 0.71  180.18  11.26  90.09  0.00  212374.35  16852.11  207717.91
00:21:10.398 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:10.398 Job: Nvme6n1 ended in about 0.71 seconds with error
00:21:10.398 Verification LBA range: start 0x0 length 0x400
00:21:10.398 Nvme6n1 : 0.71  179.70  11.23  89.85  0.00  207896.71  17850.76  216705.71
00:21:10.398 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:10.398 Job: Nvme7n1 ended in about 0.70 seconds with error
00:21:10.398 Verification LBA range: start 0x0 length 0x400
00:21:10.398 Nvme7n1 : 0.70  183.28  11.46  91.64  0.00  198089.71  22594.32  214708.42
00:21:10.398 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:10.398 Verification LBA range: start 0x0 length 0x400
00:21:10.398 Nvme8n1 : 0.68  290.00  18.12  0.00  0.00  180768.91  2668.25  206719.27
00:21:10.398 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:10.398 Verification LBA range: start 0x0 length 0x400
00:21:10.398 Nvme9n1 : 0.69  285.79  17.86  0.00  0.00  179708.60  1068.86  213709.78
00:21:10.398 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:10.398 Job: Nvme10n1 ended in about 0.67 seconds with error
00:21:10.398 Verification LBA range: start 0x0 length 0x400
00:21:10.398 Nvme10n1 : 0.67  190.57  11.91  95.28  0.00  173974.92  3557.67  237677.23
00:21:10.398 [2024-10-15T11:01:30.717Z] ===================================================================================================================
00:21:10.398 [2024-10-15T11:01:30.717Z] Total : 2048.33  128.02  732.65  0.00  203384.32  1068.86  237677.23
00:21:10.398 [2024-10-15 13:01:30.471562] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:10.398 [2024-10-15 13:01:30.471613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:10.398 [2024-10-15 13:01:30.472255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:10.398 [2024-10-15 13:01:30.472279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x103af40 with addr=10.0.0.2, port=4420
00:21:10.398 [2024-10-15 13:01:30.472290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x103af40 is same with the state(6) to be set
00:21:10.398 [2024-10-15 13:01:30.472511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:10.398 [2024-10-15 13:01:30.472521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x106ee50 with addr=10.0.0.2, port=4420
00:21:10.398 [2024-10-15 13:01:30.472529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106ee50 is same with the state(6) to be set
00:21:10.398 [2024-10-15 13:01:30.472587] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:10.398 [2024-10-15 13:01:30.472598] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:10.398 [2024-10-15 13:01:30.473061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:10.398 [2024-10-15 13:01:30.473073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:10.398 [2024-10-15 13:01:30.473125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103af40 (9): Bad file descriptor
00:21:10.398 [2024-10-15 13:01:30.473138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106ee50 (9): Bad file descriptor
00:21:10.398 [2024-10-15 13:01:30.473181-473215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: resetting controller [... analogous entries for cnode4, cnode3, cnode2, cnode1, cnode9 ...]
00:21:10.398 [2024-10-15 13:01:30.473432-473695] posix.c:1055 / nvme_tcp.c:2399 / nvme_tcp.c: 337: *ERROR*: connect() failed (errno = 111), sock connection error, and recv-state error [... analogous entries for tqpair=0x1053e80 and tqpair=0xb25610, addr=10.0.0.2, port=4420 ...]
00:21:10.398 [2024-10-15 13:01:30.473702-473743] nvme_ctrlr.c:4193/1822/1106: *ERROR*: Ctrlr is in error state; controller reinitialization failed; in failed state. [... analogous triples for nqn.2016-06.io.spdk:cnode5 and cnode6 ...]
00:21:10.398 [2024-10-15 13:01:30.473775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:10.398 [2024-10-15 13:01:30.473796, 473804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [... entry repeated twice ...]
00:21:10.398 [2024-10-15 13:01:30.473885-474741] posix.c:1055 / nvme_tcp.c:2399 / nvme_tcp.c: 337: *ERROR*: connect() failed (errno = 111), sock connection error, and recv-state error [... analogous entries for tqpair=0xc0e580, 0xc056d0, 0xc0fe10, 0xc10270, 0x106dbe0, addr=10.0.0.2, port=4420 ...]
00:21:10.399 [2024-10-15 13:01:30.474750, 474759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1053e80, 0xb25610 (9): Bad file descriptor
00:21:10.399 [2024-10-15 13:01:30.474857-474874] posix.c:1055 / nvme_tcp.c:2399 / nvme_tcp.c: 337: *ERROR*: connect() failed (errno = 111), sock connection error, and recv-state error for tqpair=0x1051c90, addr=10.0.0.2, port=4420
00:21:10.399 [2024-10-15 13:01:30.474882-474915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e580, 0xc056d0, 0xc0fe10, 0xc10270, 0x106dbe0 (9): Bad file descriptor
00:21:10.399 [2024-10-15 13:01:30.474922-474959] nvme_ctrlr.c:4193/1822/1106: *ERROR*: Ctrlr is in error state; controller reinitialization failed; in failed state. [... analogous triples for cnode10 and cnode7 ...]
00:21:10.399 [2024-10-15 13:01:30.474982, 474990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [... entry repeated twice ...]
00:21:10.399 [2024-10-15 13:01:30.474996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1051c90 (9): Bad file descriptor
00:21:10.399 [2024-10-15 13:01:30.475004-475096] nvme_ctrlr.c:4193/1822/1106: *ERROR*: Ctrlr is in error state; controller reinitialization failed; in failed state. [... analogous triples for cnode4, cnode3, cnode2, cnode1, cnode9 ...]
00:21:10.399 [2024-10-15 13:01:30.475118-475143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [... entry repeated five times ...]
00:21:10.399 [2024-10-15 13:01:30.475149-475164] nvme_ctrlr.c:4193/1822/1106: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state; controller reinitialization failed; in failed state.
00:21:10.399 [2024-10-15 13:01:30.475186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:10.658 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1276560
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1276560
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1276560
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:11.598 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup
00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:21:11.599 13:01:31
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1276266 ']' 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1276266 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1276266 ']' 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1276266 00:21:11.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1276266) - No such process 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1276266 is not found' 00:21:11.599 Process with pid 1276266 is not found 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.599 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.137 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:14.137 00:21:14.137 real 0m7.710s 00:21:14.137 user 0m18.829s 00:21:14.137 sys 0m1.270s 00:21:14.137 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:14.137 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:14.137 ************************************ 00:21:14.137 END TEST nvmf_shutdown_tc3 00:21:14.137 ************************************ 00:21:14.137 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:14.137 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:14.137 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:14.137 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:14.137 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:14.137 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:14.137 ************************************ 00:21:14.137 START TEST nvmf_shutdown_tc4 00:21:14.137 ************************************ 00:21:14.137 13:01:34
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:14.137 13:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:14.137 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:14.138 13:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:14.138 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:14.138 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:14.138 13:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:14.138 Found net devices under 0000:86:00.0: cvl_0_0 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:14.138 Found net devices under 0000:86:00.1: cvl_0_1 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:14.138 13:01:34
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:14.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:14.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:21:14.138 00:21:14.138 --- 10.0.0.2 ping statistics --- 00:21:14.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.138 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:21:14.138 00:21:14.138 --- 10.0.0.1 ping statistics --- 00:21:14.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.138 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:14.138 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:14.139 13:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1277791 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1277791 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1277791 ']' 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:14.139 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:14.139 [2024-10-15 13:01:34.398744] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:21:14.139 [2024-10-15 13:01:34.398787] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.399 [2024-10-15 13:01:34.470148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.399 [2024-10-15 13:01:34.512003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.399 [2024-10-15 13:01:34.512041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.399 [2024-10-15 13:01:34.512048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.399 [2024-10-15 13:01:34.512054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.399 [2024-10-15 13:01:34.512059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:14.399 [2024-10-15 13:01:34.513658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.399 [2024-10-15 13:01:34.513766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.399 [2024-10-15 13:01:34.513870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.399 [2024-10-15 13:01:34.513871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:14.399 [2024-10-15 13:01:34.650001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.399 13:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.399 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:14.658 Malloc1 00:21:14.658 [2024-10-15 13:01:34.771113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.658 Malloc2 00:21:14.658 Malloc3 00:21:14.658 Malloc4 00:21:14.658 Malloc5 00:21:14.658 Malloc6 00:21:14.919 Malloc7 00:21:14.919 Malloc8 00:21:14.919 Malloc9 
00:21:14.919 Malloc10
00:21:14.919 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:14.919 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:21:14.919 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:14.919 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:14.919 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1277892
00:21:14.919 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:21:14.919 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:21:15.178 [2024-10-15 13:01:35.261195] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
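The trace above records the harness starting `spdk_nvme_perf` in the background, saving its pid (`perfpid=1277892`), and sleeping five seconds so the initiator can connect before the shutdown test acts. A minimal sketch of that launch pattern follows; `SPDK_BIN` and `start_perf` are hypothetical names standing in for the Jenkins workspace path and are not part of shutdown.sh:

```shell
# Sketch only: SPDK_BIN and start_perf are hypothetical, not shutdown.sh
# source. The flags mirror the invocation recorded in the trace above.
start_perf() {
  "$SPDK_BIN/spdk_nvme_perf" -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!   # remember the workload pid so cleanup traps can kill it
  sleep 5      # let the initiator connect before the test proceeds
}
```

Backgrounding with `&` and capturing `$!` is what lets the later `trap ... kill -9 $perfpid` line guarantee the workload dies even if the test aborts early.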
00:21:20.465 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:20.465 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1277791
00:21:20.465 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1277791 ']'
00:21:20.465 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1277791
00:21:20.465 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname
00:21:20.465 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:20.465 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1277791
00:21:20.465 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:20.465 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:20.465 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1277791'
killing process with pid 1277791
00:21:20.465 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1277791
00:21:20.465 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1277791
00:21:20.465 Write completed with error (sct=0, sc=8)
00:21:20.465 Write completed with error (sct=0, sc=8)
00:21:20.465 Write completed with error (sct=0, sc=8)
00:21:20.465 Write completed with error (sct=0, sc=8)
00:21:20.465 starting I/O failed: -6
00:21:20.465 [2024-10-15 13:01:40.270283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ec1c0 is same with the state(6) to be set
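The `killprocess 1277791` sequence traced above probes that the pid is set and alive (`kill -0`), checks the process name via `ps -o comm=` so it never kills a `sudo` wrapper, then kills and reaps the target app. A simplified reconstruction of that flow, inferred from the xtrace rather than copied from the real common/autotest_common.sh helper:

```shell
# Simplified reconstruction of the killprocess() flow seen in the xtrace;
# the actual helper lives in common/autotest_common.sh and differs in detail.
killprocess() {
  pid=$1
  [ -n "$pid" ] || return 1                # the '[' -z 1277791 ']' guard
  kill -0 "$pid" 2>/dev/null || return 0   # already gone: nothing to do
  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")
  fi
  [ "$process_name" = sudo ] && return 1   # refuse to kill a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true          # reap it if it was our child
}
```

Sending plain SIGTERM first (rather than `kill -9`) is what lets the SPDK target run its shutdown path, which is exactly the window this test exercises.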
00:21:20.465 [2024-10-15 13:01:40.270388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.465 starting I/O failed: -6
00:21:20.465 NVMe io qpair process completion error
00:21:20.465 [2024-10-15 13:01:40.271160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ec690 is same with the state(6) to be set
00:21:20.466 [2024-10-15 13:01:40.272209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb820 is same with the state(6) to be set
00:21:20.466 [2024-10-15 13:01:40.273519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ea20 is same with the state(6) to be set
00:21:20.466 [2024-10-15 13:01:40.274140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea010 is same with the state(6) to be set
00:21:20.466 [2024-10-15 13:01:40.275877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed030 is same with the state(6) to be set
00:21:20.466 [2024-10-15 13:01:40.276225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed500 is same with the state(6) to be set
00:21:20.466 [2024-10-15 13:01:40.276981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed9d0 is same with the state(6) to be set
00:21:20.466 [2024-10-15 13:01:40.277681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ecb60 is same with the state(6) to be set
00:21:20.466 [2024-10-15 13:01:40.282772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178db0 is same with the state(6) to be set
00:21:20.466 [2024-10-15 13:01:40.283532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21792a0 is same with the state(6) to be set
00:21:20.466 Write completed with error (sct=0, sc=8)
00:21:20.466 Write completed with error (sct=0, sc=8)
00:21:20.466 starting I/O failed: -6
00:21:20.467 [2024-10-15 13:01:40.284263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.467 [2024-10-15 13:01:40.284430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2179790 is same with the state(6) to be set
00:21:20.467 Write completed with error (sct=0, sc=8)
00:21:20.467 starting I/O failed: -6
00:21:20.467 Write completed with error (sct=0, sc=8)
00:21:20.467 starting I/O failed: -6
00:21:20.467 [2024-10-15 13:01:40.284909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21788e0 is same with the state(6) to be set
00:21:20.467 [2024-10-15 13:01:40.285146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.467 Write completed with error (sct=0, sc=8)
00:21:20.467 starting I/O failed: -6
00:21:20.467 [2024-10-15 13:01:40.286140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.468 Write completed with error (sct=0, sc=8)
00:21:20.468 starting I/O failed:
-6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 [2024-10-15 13:01:40.286954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23593f0 is same with the state(6) to be set 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 [2024-10-15 13:01:40.286973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23593f0 is same with starting I/O failed: -6 00:21:20.468 the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.286981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23593f0 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.286987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23593f0 is same with the state(6) to be set 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 [2024-10-15 13:01:40.286994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23593f0 is same with starting I/O failed: -6 00:21:20.468 the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.287002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23593f0 is same with the state(6) to be set 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 
00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 [2024-10-15 13:01:40.287278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2179c80 is same with Write completed with error (sct=0, sc=8) 00:21:20.468 the state(6) to be set 00:21:20.468 starting I/O failed: -6 00:21:20.468 [2024-10-15 13:01:40.287300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2179c80 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.287308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2179c80 is same with Write completed with error (sct=0, sc=8) 00:21:20.468 the state(6) to be set 00:21:20.468 starting I/O failed: -6 00:21:20.468 [2024-10-15 13:01:40.287319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2179c80 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.287325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2179c80 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.287331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2179c80 is same with Write completed with error (sct=0, sc=8) 00:21:20.468 the state(6) to be set 00:21:20.468 starting I/O failed: -6 00:21:20.468 
Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 [2024-10-15 13:01:40.287595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a170 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.287624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a170 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.287631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a170 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.287637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a170 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.287643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a170 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.287649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a170 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.287655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a170 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.287661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a170 is same 
with the state(6) to be set 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 [2024-10-15 13:01:40.287667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a170 is same with the state(6) to be set 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 [2024-10-15 13:01:40.287738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:20.468 NVMe io qpair process completion error 00:21:20.468 [2024-10-15 13:01:40.288911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d210 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.288929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d210 is same with the state(6) to be set 00:21:20.468 [2024-10-15 13:01:40.289096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d6e0 is same with the state(6) to be set 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 
00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.468 starting I/O failed: -6 00:21:20.468 Write completed with error (sct=0, sc=8) 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error 
(sct=0, sc=8) 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 [2024-10-15 13:01:40.290320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b000 is same with the state(6) to be set 00:21:20.469 starting I/O failed: -6 00:21:20.469 [2024-10-15 13:01:40.290331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b000 is same with the state(6) to be set 00:21:20.469 [2024-10-15 13:01:40.290338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b000 is same with the state(6) to be set 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 [2024-10-15 13:01:40.290344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b000 is same with the state(6) to be set 00:21:20.469 [2024-10-15 13:01:40.290350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b000 is same with the state(6) to be set 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 [2024-10-15 13:01:40.290356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b000 is same with the state(6) to be set 00:21:20.469 starting I/O failed: -6 00:21:20.469 [2024-10-15 13:01:40.290362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b000 is same with the state(6) to be set 00:21:20.469 [2024-10-15 13:01:40.290369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b000 is same with the state(6) to be set 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 
00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 [2024-10-15 13:01:40.290561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:20.469 starting I/O failed: -6 00:21:20.469 starting I/O failed: -6 00:21:20.469 starting I/O failed: -6 00:21:20.469 starting I/O failed: -6 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error 
(sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with 
error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed 
with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.469 starting I/O failed: -6 00:21:20.469 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write 
completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 
Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 [2024-10-15 13:01:40.293835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:20.470 NVMe io qpair process completion error 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write 
completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 [2024-10-15 13:01:40.295125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:20.470 starting I/O failed: -6 00:21:20.470 starting I/O failed: -6 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 Write completed with error (sct=0, sc=8) 00:21:20.470 starting I/O failed: -6 00:21:20.470 Write completed with error (sct=0, 
sc=8) 00:21:20.470 starting I/O failed: -6
00:21:20.470 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:21:20.470 [2024-10-15 13:01:40.296084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.470 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:21:20.471 [2024-10-15 13:01:40.297158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.471 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:21:20.471 [2024-10-15 13:01:40.299240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:20.471 NVMe io qpair process completion error
00:21:20.471 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:21:20.471 [2024-10-15 13:01:40.300253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.471 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:21:20.472 [2024-10-15 13:01:40.301129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.472 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:21:20.472 [2024-10-15 13:01:40.302156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:20.472 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:21:20.473 [2024-10-15 13:01:40.307697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.473 NVMe io qpair process completion error
00:21:20.473 [... repeated "Write completed with error (sct=0, sc=8)" lines omitted ...]
00:21:20.473 [2024-10-15 13:01:40.311936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.473 NVMe io qpair process completion error
00:21:20.473 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:21:20.474 [2024-10-15 13:01:40.313688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.474 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted, log continues ...]
error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 [2024-10-15 13:01:40.314702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 
00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.474 Write completed with error (sct=0, sc=8) 00:21:20.474 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, 
sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error 
(sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 [2024-10-15 13:01:40.316520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:20.475 NVMe io qpair process completion error 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error 
(sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 [2024-10-15 13:01:40.317623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 
00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write 
completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 [2024-10-15 13:01:40.318524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 Write completed with error (sct=0, sc=8) 00:21:20.475 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: 
-6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with 
error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 [2024-10-15 13:01:40.319591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 
00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: 
-6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O 
failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 [2024-10-15 13:01:40.321192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:20.476 NVMe io qpair process completion error 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 Write completed with error (sct=0, sc=8) 00:21:20.476 starting I/O failed: -6 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 starting I/O failed: -6 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 
00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 starting I/O failed: -6 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 starting I/O failed: -6 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 starting I/O failed: -6 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 starting I/O failed: -6 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 starting I/O failed: -6 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 starting I/O failed: -6 00:21:20.477 [2024-10-15 13:01:40.322225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 starting I/O failed: -6 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 starting I/O failed: -6 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 starting 
I/O failed: -6 00:21:20.477 Write completed with error (sct=0, sc=8) 00:21:20.477 starting I/O failed: -6
00:21:20.477 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs elided between the qpair error records below]
00:21:20.477 [2024-10-15 13:01:40.323148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.477 [2024-10-15 13:01:40.324227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.478 [2024-10-15 13:01:40.327347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:20.478 NVMe io qpair process completion error
00:21:20.478 [2024-10-15 13:01:40.328339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.478 [2024-10-15 13:01:40.329197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.479 [2024-10-15 13:01:40.330221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.479 [2024-10-15 13:01:40.334195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:20.479 NVMe io qpair process completion error
00:21:20.480 [2024-10-15 13:01:40.335138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.480 [2024-10-15 13:01:40.336042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.480 [2024-10-15 13:01:40.337051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed
with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write 
completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 Write completed with error (sct=0, sc=8) 00:21:20.481 starting I/O failed: -6 00:21:20.481 
[2024-10-15 13:01:40.339502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:20.481 NVMe io qpair process completion error 00:21:20.481 Initializing NVMe Controllers 00:21:20.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:20.481 Controller IO queue size 128, less than required. 00:21:20.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:20.481 Controller IO queue size 128, less than required. 00:21:20.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:20.481 Controller IO queue size 128, less than required. 00:21:20.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:20.481 Controller IO queue size 128, less than required. 00:21:20.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:20.481 Controller IO queue size 128, less than required. 00:21:20.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:20.481 Controller IO queue size 128, less than required. 00:21:20.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:20.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:20.481 Controller IO queue size 128, less than required. 00:21:20.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:20.481 Controller IO queue size 128, less than required. 00:21:20.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:20.481 Controller IO queue size 128, less than required. 00:21:20.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:20.481 Controller IO queue size 128, less than required. 00:21:20.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:20.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:21:20.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:21:20.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:21:20.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:21:20.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:21:20.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:21:20.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:21:20.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:21:20.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:20.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:21:20.481 Initialization complete. Launching workers. 
00:21:20.481 ======================================================== 00:21:20.481 Latency(us) 00:21:20.481 Device Information : IOPS MiB/s Average min max 00:21:20.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2191.26 94.16 58420.50 911.46 113262.86 00:21:20.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2178.44 93.60 58779.41 871.29 115606.94 00:21:20.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2208.35 94.89 58010.65 884.82 119169.15 00:21:20.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2229.28 95.79 57506.96 701.36 112519.11 00:21:20.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2222.66 95.50 57007.86 686.73 95499.55 00:21:20.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2194.68 94.30 57762.54 407.96 109665.54 00:21:20.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2201.72 94.61 57596.21 431.02 110987.93 00:21:20.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2186.56 93.95 58011.71 916.22 111072.81 00:21:20.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2107.10 90.54 60197.79 613.36 110204.29 00:21:20.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2153.24 92.52 58975.40 1048.36 111257.03 00:21:20.481 ======================================================== 00:21:20.481 Total : 21873.28 939.87 58214.20 407.96 119169.15 00:21:20.481 00:21:20.481 [2024-10-15 13:01:40.342451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e904c0 is same with the state(6) to be set 00:21:20.481 [2024-10-15 13:01:40.342494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e89960 is same with the state(6) to be set 00:21:20.481 [2024-10-15 13:01:40.342524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e89fc0 is same with the state(6) to be set 00:21:20.481 [2024-10-15 13:01:40.342553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e90190 is same with the state(6) to be set 00:21:20.481 [2024-10-15 13:01:40.342582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e89630 is same with the state(6) to be set 00:21:20.481 [2024-10-15 13:01:40.342615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8fe60 is same with the state(6) to be set 00:21:20.481 [2024-10-15 13:01:40.342642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8b9d0 is same with the state(6) to be set 00:21:20.481 [2024-10-15 13:01:40.342670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e89c90 is same with the state(6) to be set 00:21:20.481 [2024-10-15 13:01:40.342696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8b7f0 is same with the state(6) to be set 00:21:20.481 [2024-10-15 13:01:40.342725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bbb0 is same with the state(6) to be set 00:21:20.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:20.481 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1277892 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1277892 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@638 -- # local arg=wait 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1277892 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@514 -- # nvmfcleanup 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.420 rmmod nvme_tcp 00:21:21.420 rmmod nvme_fabrics 00:21:21.420 rmmod nvme_keyring 00:21:21.420 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.679 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:21.679 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:21.679 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1277791 ']' 00:21:21.679 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1277791 00:21:21.679 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1277791 ']' 00:21:21.679 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1277791 00:21:21.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1277791) - No such process 00:21:21.679 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1277791 is not found' 00:21:21.679 Process with pid 1277791 is not found 
00:21:21.680 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:21.680 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:21.680 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:21.680 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:21.680 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:21:21.680 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:21.680 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:21:21.680 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.680 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:21.680 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.680 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.680 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.585 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.585 00:21:23.585 real 0m9.795s 00:21:23.585 user 0m24.886s 00:21:23.585 sys 0m5.238s 00:21:23.585 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:23.585 13:01:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:23.585 ************************************ 00:21:23.585 END TEST nvmf_shutdown_tc4 00:21:23.585 ************************************ 00:21:23.585 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:23.585 00:21:23.585 real 0m41.550s 00:21:23.585 user 1m43.300s 00:21:23.585 sys 0m14.094s 00:21:23.585 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:23.585 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:23.585 ************************************ 00:21:23.585 END TEST nvmf_shutdown 00:21:23.585 ************************************ 00:21:23.585 13:01:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:21:23.585 00:21:23.585 real 11m39.228s 00:21:23.585 user 25m10.680s 00:21:23.585 sys 3m35.294s 00:21:23.585 13:01:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:23.585 13:01:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:23.585 ************************************ 00:21:23.585 END TEST nvmf_target_extra 00:21:23.585 ************************************ 00:21:23.845 13:01:43 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:23.845 13:01:43 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:23.845 13:01:43 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:23.845 13:01:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:23.845 ************************************ 00:21:23.845 START TEST nvmf_host 00:21:23.845 ************************************ 00:21:23.845 13:01:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:23.845 * Looking for test storage... 00:21:23.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:23.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.845 --rc genhtml_branch_coverage=1 00:21:23.845 --rc genhtml_function_coverage=1 00:21:23.845 --rc genhtml_legend=1 00:21:23.845 --rc geninfo_all_blocks=1 00:21:23.845 --rc geninfo_unexecuted_blocks=1 00:21:23.845 00:21:23.845 ' 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:23.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.845 --rc genhtml_branch_coverage=1 00:21:23.845 --rc genhtml_function_coverage=1 00:21:23.845 --rc genhtml_legend=1 00:21:23.845 --rc 
geninfo_all_blocks=1 00:21:23.845 --rc geninfo_unexecuted_blocks=1 00:21:23.845 00:21:23.845 ' 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:23.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.845 --rc genhtml_branch_coverage=1 00:21:23.845 --rc genhtml_function_coverage=1 00:21:23.845 --rc genhtml_legend=1 00:21:23.845 --rc geninfo_all_blocks=1 00:21:23.845 --rc geninfo_unexecuted_blocks=1 00:21:23.845 00:21:23.845 ' 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:23.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.845 --rc genhtml_branch_coverage=1 00:21:23.845 --rc genhtml_function_coverage=1 00:21:23.845 --rc genhtml_legend=1 00:21:23.845 --rc geninfo_all_blocks=1 00:21:23.845 --rc geninfo_unexecuted_blocks=1 00:21:23.845 00:21:23.845 ' 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:23.845 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.105 13:01:44 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:24.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.106 ************************************ 00:21:24.106 START TEST nvmf_multicontroller 00:21:24.106 ************************************ 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:24.106 * Looking for test storage... 
00:21:24.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:24.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.106 --rc genhtml_branch_coverage=1 00:21:24.106 --rc genhtml_function_coverage=1 
00:21:24.106 --rc genhtml_legend=1 00:21:24.106 --rc geninfo_all_blocks=1 00:21:24.106 --rc geninfo_unexecuted_blocks=1 00:21:24.106 00:21:24.106 ' 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:24.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.106 --rc genhtml_branch_coverage=1 00:21:24.106 --rc genhtml_function_coverage=1 00:21:24.106 --rc genhtml_legend=1 00:21:24.106 --rc geninfo_all_blocks=1 00:21:24.106 --rc geninfo_unexecuted_blocks=1 00:21:24.106 00:21:24.106 ' 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:24.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.106 --rc genhtml_branch_coverage=1 00:21:24.106 --rc genhtml_function_coverage=1 00:21:24.106 --rc genhtml_legend=1 00:21:24.106 --rc geninfo_all_blocks=1 00:21:24.106 --rc geninfo_unexecuted_blocks=1 00:21:24.106 00:21:24.106 ' 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:24.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.106 --rc genhtml_branch_coverage=1 00:21:24.106 --rc genhtml_function_coverage=1 00:21:24.106 --rc genhtml_legend=1 00:21:24.106 --rc geninfo_all_blocks=1 00:21:24.106 --rc geninfo_unexecuted_blocks=1 00:21:24.106 00:21:24.106 ' 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.106 13:01:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.106 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:24.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@438 -- # remove_spdk_ns 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:24.107 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:24.365 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:24.365 13:01:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.935 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.935 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:30.935 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:30.935 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:30.935 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:30.935 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:30.935 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:30.935 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:30.935 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:30.935 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:30.935 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:30.935 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:30.936 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:30.936 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.936 13:01:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:30.936 Found net devices under 0000:86:00.0: cvl_0_0 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:30.936 Found net devices under 0000:86:00.1: cvl_0_1 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:30.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:21:30.936 00:21:30.936 --- 10.0.0.2 ping statistics --- 00:21:30.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.936 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:30.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:21:30.936 00:21:30.936 --- 10.0.0.1 ping statistics --- 00:21:30.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.936 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1282590 00:21:30.936 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1282590 00:21:30.937 13:01:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1282590 ']' 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 [2024-10-15 13:01:50.393766] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:21:30.937 [2024-10-15 13:01:50.393810] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.937 [2024-10-15 13:01:50.450611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:30.937 [2024-10-15 13:01:50.493673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.937 [2024-10-15 13:01:50.493705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:30.937 [2024-10-15 13:01:50.493712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.937 [2024-10-15 13:01:50.493718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.937 [2024-10-15 13:01:50.493723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.937 [2024-10-15 13:01:50.495084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.937 [2024-10-15 13:01:50.495191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.937 [2024-10-15 13:01:50.495191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 [2024-10-15 13:01:50.638962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 Malloc0 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 [2024-10-15 
13:01:50.708584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 [2024-10-15 13:01:50.716506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 Malloc1 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1282612 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1282612 /var/tmp/bdevperf.sock 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1282612 ']' 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.937 13:01:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 NVMe0n1 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.937 1 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:30.937 13:01:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.937 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 request: 00:21:30.938 { 00:21:30.938 "name": "NVMe0", 00:21:30.938 "trtype": "tcp", 00:21:30.938 "traddr": "10.0.0.2", 00:21:30.938 "adrfam": "ipv4", 00:21:30.938 "trsvcid": "4420", 00:21:30.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.197 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:31.197 "hostaddr": "10.0.0.1", 00:21:31.197 "prchk_reftag": false, 00:21:31.197 "prchk_guard": false, 00:21:31.197 "hdgst": false, 00:21:31.197 "ddgst": false, 00:21:31.197 "allow_unrecognized_csi": false, 00:21:31.197 "method": "bdev_nvme_attach_controller", 00:21:31.197 "req_id": 1 00:21:31.197 } 00:21:31.197 Got JSON-RPC error response 00:21:31.197 response: 00:21:31.197 { 00:21:31.197 "code": -114, 00:21:31.197 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:31.197 } 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:31.197 13:01:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.197 request: 00:21:31.197 { 00:21:31.197 "name": "NVMe0", 00:21:31.197 "trtype": "tcp", 00:21:31.197 "traddr": "10.0.0.2", 00:21:31.197 "adrfam": "ipv4", 00:21:31.197 "trsvcid": "4420", 00:21:31.197 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:31.197 "hostaddr": "10.0.0.1", 00:21:31.197 "prchk_reftag": false, 00:21:31.197 "prchk_guard": false, 00:21:31.197 "hdgst": false, 00:21:31.197 "ddgst": false, 00:21:31.197 "allow_unrecognized_csi": false, 00:21:31.197 "method": "bdev_nvme_attach_controller", 00:21:31.197 "req_id": 1 00:21:31.197 } 00:21:31.197 Got JSON-RPC error response 00:21:31.197 response: 00:21:31.197 { 00:21:31.197 "code": -114, 00:21:31.197 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:31.197 } 00:21:31.197 13:01:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.197 request: 00:21:31.197 { 00:21:31.197 "name": "NVMe0", 00:21:31.197 "trtype": "tcp", 00:21:31.197 "traddr": "10.0.0.2", 00:21:31.197 "adrfam": "ipv4", 00:21:31.197 "trsvcid": "4420", 00:21:31.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.197 "hostaddr": "10.0.0.1", 00:21:31.197 "prchk_reftag": false, 00:21:31.197 "prchk_guard": false, 00:21:31.197 "hdgst": false, 00:21:31.197 "ddgst": false, 00:21:31.197 "multipath": "disable", 00:21:31.197 "allow_unrecognized_csi": false, 00:21:31.197 "method": "bdev_nvme_attach_controller", 00:21:31.197 "req_id": 1 00:21:31.197 } 00:21:31.197 Got JSON-RPC error response 00:21:31.197 response: 00:21:31.197 { 00:21:31.197 "code": -114, 00:21:31.197 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:31.197 } 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:31.197 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.198 request: 00:21:31.198 { 00:21:31.198 "name": "NVMe0", 00:21:31.198 "trtype": "tcp", 00:21:31.198 "traddr": "10.0.0.2", 00:21:31.198 "adrfam": "ipv4", 00:21:31.198 "trsvcid": "4420", 00:21:31.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.198 "hostaddr": "10.0.0.1", 00:21:31.198 "prchk_reftag": false, 00:21:31.198 "prchk_guard": false, 00:21:31.198 "hdgst": false, 00:21:31.198 "ddgst": false, 00:21:31.198 "multipath": "failover", 00:21:31.198 "allow_unrecognized_csi": false, 00:21:31.198 "method": "bdev_nvme_attach_controller", 00:21:31.198 "req_id": 1 00:21:31.198 } 00:21:31.198 Got JSON-RPC error response 00:21:31.198 response: 00:21:31.198 { 00:21:31.198 "code": -114, 00:21:31.198 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:31.198 } 00:21:31.198 13:01:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.198 NVMe0n1 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.198 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.456 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.457 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:31.457 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.457 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.457 00:21:31.457 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.457 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.457 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:31.457 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.457 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.457 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.457 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:31.457 13:01:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:32.834 { 00:21:32.834 "results": [ 00:21:32.834 { 00:21:32.834 "job": "NVMe0n1", 00:21:32.834 "core_mask": "0x1", 00:21:32.834 "workload": "write", 00:21:32.834 "status": "finished", 00:21:32.834 "queue_depth": 128, 00:21:32.834 "io_size": 4096, 00:21:32.834 "runtime": 1.004395, 00:21:32.834 "iops": 24924.457011434744, 00:21:32.834 "mibps": 97.36116020091697, 00:21:32.834 "io_failed": 0, 00:21:32.834 "io_timeout": 0, 00:21:32.834 "avg_latency_us": 5129.170214375116, 00:21:32.834 "min_latency_us": 1458.9561904761904, 00:21:32.834 "max_latency_us": 8925.379047619048 00:21:32.834 } 00:21:32.834 ], 00:21:32.834 "core_count": 1 00:21:32.834 } 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1282612 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1282612 ']' 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1282612 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1282612 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1282612' 00:21:32.834 killing process with pid 1282612 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1282612 00:21:32.834 13:01:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1282612 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:21:32.834 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:32.834 [2024-10-15 13:01:50.817263] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:21:32.834 [2024-10-15 13:01:50.817314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282612 ] 00:21:32.834 [2024-10-15 13:01:50.885104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.834 [2024-10-15 13:01:50.925838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.834 [2024-10-15 13:01:51.632849] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 10c145b8-22f7-4618-9366-86aa9bf91dd3 already exists 00:21:32.834 [2024-10-15 13:01:51.632876] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:10c145b8-22f7-4618-9366-86aa9bf91dd3 alias for bdev NVMe1n1 00:21:32.834 [2024-10-15 13:01:51.632885] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:32.834 Running I/O for 1 seconds... 00:21:32.834 24906.00 IOPS, 97.29 MiB/s 00:21:32.834 Latency(us) 00:21:32.834 [2024-10-15T11:01:53.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.834 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:32.834 NVMe0n1 : 1.00 24924.46 97.36 0.00 0.00 5129.17 1458.96 8925.38 00:21:32.834 [2024-10-15T11:01:53.153Z] =================================================================================================================== 00:21:32.834 [2024-10-15T11:01:53.153Z] Total : 24924.46 97.36 0.00 0.00 5129.17 1458.96 8925.38 00:21:32.834 Received shutdown signal, test time was about 1.000000 seconds 00:21:32.834 00:21:32.834 Latency(us) 00:21:32.834 [2024-10-15T11:01:53.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.834 [2024-10-15T11:01:53.153Z] =================================================================================================================== 00:21:32.834 [2024-10-15T11:01:53.153Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:32.834 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.834 rmmod nvme_tcp 00:21:32.834 rmmod nvme_fabrics 00:21:32.834 rmmod nvme_keyring 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1282590 ']' 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 1282590 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1282590 ']' 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1282590 
00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:32.834 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1282590 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1282590' 00:21:33.093 killing process with pid 1282590 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1282590 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1282590 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.093 13:01:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:35.630 00:21:35.630 real 0m11.240s 00:21:35.630 user 0m12.705s 00:21:35.630 sys 0m5.114s 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.630 ************************************ 00:21:35.630 END TEST nvmf_multicontroller 00:21:35.630 ************************************ 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.630 ************************************ 00:21:35.630 START TEST nvmf_aer 00:21:35.630 ************************************ 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:35.630 * Looking for test storage... 
00:21:35.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:35.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.630 --rc genhtml_branch_coverage=1 00:21:35.630 --rc genhtml_function_coverage=1 00:21:35.630 --rc genhtml_legend=1 00:21:35.630 --rc geninfo_all_blocks=1 00:21:35.630 --rc geninfo_unexecuted_blocks=1 00:21:35.630 00:21:35.630 ' 00:21:35.630 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:35.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.630 --rc 
genhtml_branch_coverage=1 00:21:35.630 --rc genhtml_function_coverage=1 00:21:35.630 --rc genhtml_legend=1 00:21:35.630 --rc geninfo_all_blocks=1 00:21:35.630 --rc geninfo_unexecuted_blocks=1 00:21:35.630 00:21:35.630 ' 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:35.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.631 --rc genhtml_branch_coverage=1 00:21:35.631 --rc genhtml_function_coverage=1 00:21:35.631 --rc genhtml_legend=1 00:21:35.631 --rc geninfo_all_blocks=1 00:21:35.631 --rc geninfo_unexecuted_blocks=1 00:21:35.631 00:21:35.631 ' 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:35.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.631 --rc genhtml_branch_coverage=1 00:21:35.631 --rc genhtml_function_coverage=1 00:21:35.631 --rc genhtml_legend=1 00:21:35.631 --rc geninfo_all_blocks=1 00:21:35.631 --rc geninfo_unexecuted_blocks=1 00:21:35.631 00:21:35.631 ' 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.631 13:01:55 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:35.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:35.631 13:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:42.202 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.202 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:42.203 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.203 13:02:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:42.203 Found net devices under 0000:86:00.0: cvl_0_0 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # 
(( 1 == 0 )) 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:42.203 Found net devices under 0000:86:00.1: cvl_0_1 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:42.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:42.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:21:42.203 00:21:42.203 --- 10.0.0.2 ping statistics --- 00:21:42.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.203 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:21:42.203 00:21:42.203 --- 10.0.0.1 ping statistics --- 00:21:42.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.203 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1286605 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1286605 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1286605 ']' 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.203 [2024-10-15 13:02:01.723563] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:21:42.203 [2024-10-15 13:02:01.723625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.203 [2024-10-15 13:02:01.792859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.203 [2024-10-15 13:02:01.835907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:42.203 [2024-10-15 13:02:01.835946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.203 [2024-10-15 13:02:01.835954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.203 [2024-10-15 13:02:01.835960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.203 [2024-10-15 13:02:01.835966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.203 [2024-10-15 13:02:01.837372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.203 [2024-10-15 13:02:01.837485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.203 [2024-10-15 13:02:01.837589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.203 [2024-10-15 13:02:01.837591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.203 [2024-10-15 13:02:01.982620] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.203 13:02:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.203 Malloc0 00:21:42.203 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.203 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:42.203 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.203 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.204 [2024-10-15 13:02:02.039993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.204 [ 00:21:42.204 { 00:21:42.204 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:42.204 "subtype": "Discovery", 00:21:42.204 "listen_addresses": [], 00:21:42.204 "allow_any_host": true, 00:21:42.204 "hosts": [] 00:21:42.204 }, 00:21:42.204 { 00:21:42.204 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.204 "subtype": "NVMe", 00:21:42.204 "listen_addresses": [ 00:21:42.204 { 00:21:42.204 "trtype": "TCP", 00:21:42.204 "adrfam": "IPv4", 00:21:42.204 "traddr": "10.0.0.2", 00:21:42.204 "trsvcid": "4420" 00:21:42.204 } 00:21:42.204 ], 00:21:42.204 "allow_any_host": true, 00:21:42.204 "hosts": [], 00:21:42.204 "serial_number": "SPDK00000000000001", 00:21:42.204 "model_number": "SPDK bdev Controller", 00:21:42.204 "max_namespaces": 2, 00:21:42.204 "min_cntlid": 1, 00:21:42.204 "max_cntlid": 65519, 00:21:42.204 "namespaces": [ 00:21:42.204 { 00:21:42.204 "nsid": 1, 00:21:42.204 "bdev_name": "Malloc0", 00:21:42.204 "name": "Malloc0", 00:21:42.204 "nguid": "83BD28E4A4444DB8A06B85DC5E569B95", 00:21:42.204 "uuid": "83bd28e4-a444-4db8-a06b-85dc5e569b95" 00:21:42.204 } 00:21:42.204 ] 00:21:42.204 } 00:21:42.204 ] 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1286630 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.204 Malloc1 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.204 Asynchronous Event Request test 00:21:42.204 Attaching to 10.0.0.2 00:21:42.204 Attached to 10.0.0.2 00:21:42.204 Registering asynchronous event callbacks... 00:21:42.204 Starting namespace attribute notice tests for all controllers... 00:21:42.204 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:42.204 aer_cb - Changed Namespace 00:21:42.204 Cleaning up... 
00:21:42.204 [ 00:21:42.204 { 00:21:42.204 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:42.204 "subtype": "Discovery", 00:21:42.204 "listen_addresses": [], 00:21:42.204 "allow_any_host": true, 00:21:42.204 "hosts": [] 00:21:42.204 }, 00:21:42.204 { 00:21:42.204 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.204 "subtype": "NVMe", 00:21:42.204 "listen_addresses": [ 00:21:42.204 { 00:21:42.204 "trtype": "TCP", 00:21:42.204 "adrfam": "IPv4", 00:21:42.204 "traddr": "10.0.0.2", 00:21:42.204 "trsvcid": "4420" 00:21:42.204 } 00:21:42.204 ], 00:21:42.204 "allow_any_host": true, 00:21:42.204 "hosts": [], 00:21:42.204 "serial_number": "SPDK00000000000001", 00:21:42.204 "model_number": "SPDK bdev Controller", 00:21:42.204 "max_namespaces": 2, 00:21:42.204 "min_cntlid": 1, 00:21:42.204 "max_cntlid": 65519, 00:21:42.204 "namespaces": [ 00:21:42.204 { 00:21:42.204 "nsid": 1, 00:21:42.204 "bdev_name": "Malloc0", 00:21:42.204 "name": "Malloc0", 00:21:42.204 "nguid": "83BD28E4A4444DB8A06B85DC5E569B95", 00:21:42.204 "uuid": "83bd28e4-a444-4db8-a06b-85dc5e569b95" 00:21:42.204 }, 00:21:42.204 { 00:21:42.204 "nsid": 2, 00:21:42.204 "bdev_name": "Malloc1", 00:21:42.204 "name": "Malloc1", 00:21:42.204 "nguid": "E3A13211AF144D4BB6E25D4845743ACA", 00:21:42.204 "uuid": "e3a13211-af14-4d4b-b6e2-5d4845743aca" 00:21:42.204 } 00:21:42.204 ] 00:21:42.204 } 00:21:42.204 ] 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1286630 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.204 13:02:02 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:42.204 rmmod nvme_tcp 00:21:42.204 rmmod nvme_fabrics 00:21:42.204 rmmod nvme_keyring 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 
1286605 ']' 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1286605 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1286605 ']' 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1286605 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1286605 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1286605' 00:21:42.204 killing process with pid 1286605 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1286605 00:21:42.204 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1286605 00:21:42.464 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:42.464 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:42.464 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:42.464 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:42.464 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:21:42.464 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:42.464 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:21:42.464 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:42.464 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:42.464 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.464 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.464 13:02:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:45.073 00:21:45.073 real 0m9.216s 00:21:45.073 user 0m5.051s 00:21:45.073 sys 0m4.877s 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.073 ************************************ 00:21:45.073 END TEST nvmf_aer 00:21:45.073 ************************************ 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.073 ************************************ 00:21:45.073 START TEST nvmf_async_init 00:21:45.073 ************************************ 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:45.073 * Looking for test storage... 
00:21:45.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.073 13:02:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:45.073 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:45.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.074 --rc genhtml_branch_coverage=1 00:21:45.074 --rc genhtml_function_coverage=1 00:21:45.074 --rc genhtml_legend=1 00:21:45.074 --rc geninfo_all_blocks=1 00:21:45.074 --rc geninfo_unexecuted_blocks=1 00:21:45.074 
00:21:45.074 ' 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:45.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.074 --rc genhtml_branch_coverage=1 00:21:45.074 --rc genhtml_function_coverage=1 00:21:45.074 --rc genhtml_legend=1 00:21:45.074 --rc geninfo_all_blocks=1 00:21:45.074 --rc geninfo_unexecuted_blocks=1 00:21:45.074 00:21:45.074 ' 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:45.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.074 --rc genhtml_branch_coverage=1 00:21:45.074 --rc genhtml_function_coverage=1 00:21:45.074 --rc genhtml_legend=1 00:21:45.074 --rc geninfo_all_blocks=1 00:21:45.074 --rc geninfo_unexecuted_blocks=1 00:21:45.074 00:21:45.074 ' 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:45.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.074 --rc genhtml_branch_coverage=1 00:21:45.074 --rc genhtml_function_coverage=1 00:21:45.074 --rc genhtml_legend=1 00:21:45.074 --rc geninfo_all_blocks=1 00:21:45.074 --rc geninfo_unexecuted_blocks=1 00:21:45.074 00:21:45.074 ' 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.074 13:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=773c950ab6e341dd9e73e827a3bfac79 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:45.074 13:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:50.411 13:02:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:50.411 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:50.411 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:50.411 Found net devices under 0000:86:00.0: cvl_0_0 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.411 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ 
up == up ]] 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:50.412 Found net devices under 0000:86:00.1: cvl_0_1 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.412 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:50.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:50.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:21:50.671 00:21:50.671 --- 10.0.0.2 ping statistics --- 00:21:50.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.671 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:50.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:21:50.671 00:21:50.671 --- 10.0.0.1 ping statistics --- 00:21:50.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.671 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1290165 00:21:50.671 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1290165 00:21:50.672 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:50.672 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1290165 ']' 00:21:50.672 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.672 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:50.672 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.672 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:50.672 13:02:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.931 [2024-10-15 13:02:11.024224] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:21:50.931 [2024-10-15 13:02:11.024274] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.931 [2024-10-15 13:02:11.097410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.931 [2024-10-15 13:02:11.137348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.931 [2024-10-15 13:02:11.137382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.931 [2024-10-15 13:02:11.137391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.931 [2024-10-15 13:02:11.137397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.931 [2024-10-15 13:02:11.137402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:50.931 [2024-10-15 13:02:11.137987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.931 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:50.931 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:21:50.931 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:50.931 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:50.931 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.190 [2024-10-15 13:02:11.285092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.190 null0 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 773c950ab6e341dd9e73e827a3bfac79 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.190 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.191 [2024-10-15 13:02:11.329343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.191 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.191 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:51.191 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.191 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.449 nvme0n1 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.450 [ 00:21:51.450 { 00:21:51.450 "name": "nvme0n1", 00:21:51.450 "aliases": [ 00:21:51.450 "773c950a-b6e3-41dd-9e73-e827a3bfac79" 00:21:51.450 ], 00:21:51.450 "product_name": "NVMe disk", 00:21:51.450 "block_size": 512, 00:21:51.450 "num_blocks": 2097152, 00:21:51.450 "uuid": "773c950a-b6e3-41dd-9e73-e827a3bfac79", 00:21:51.450 "numa_id": 1, 00:21:51.450 "assigned_rate_limits": { 00:21:51.450 "rw_ios_per_sec": 0, 00:21:51.450 "rw_mbytes_per_sec": 0, 00:21:51.450 "r_mbytes_per_sec": 0, 00:21:51.450 "w_mbytes_per_sec": 0 00:21:51.450 }, 00:21:51.450 "claimed": false, 00:21:51.450 "zoned": false, 00:21:51.450 "supported_io_types": { 00:21:51.450 "read": true, 00:21:51.450 "write": true, 00:21:51.450 "unmap": false, 00:21:51.450 "flush": true, 00:21:51.450 "reset": true, 00:21:51.450 "nvme_admin": true, 00:21:51.450 "nvme_io": true, 00:21:51.450 "nvme_io_md": false, 00:21:51.450 "write_zeroes": true, 00:21:51.450 "zcopy": false, 00:21:51.450 "get_zone_info": false, 00:21:51.450 "zone_management": false, 00:21:51.450 "zone_append": false, 00:21:51.450 "compare": true, 00:21:51.450 "compare_and_write": true, 00:21:51.450 "abort": true, 00:21:51.450 "seek_hole": false, 00:21:51.450 "seek_data": false, 00:21:51.450 "copy": true, 00:21:51.450 
"nvme_iov_md": false 00:21:51.450 }, 00:21:51.450 "memory_domains": [ 00:21:51.450 { 00:21:51.450 "dma_device_id": "system", 00:21:51.450 "dma_device_type": 1 00:21:51.450 } 00:21:51.450 ], 00:21:51.450 "driver_specific": { 00:21:51.450 "nvme": [ 00:21:51.450 { 00:21:51.450 "trid": { 00:21:51.450 "trtype": "TCP", 00:21:51.450 "adrfam": "IPv4", 00:21:51.450 "traddr": "10.0.0.2", 00:21:51.450 "trsvcid": "4420", 00:21:51.450 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:51.450 }, 00:21:51.450 "ctrlr_data": { 00:21:51.450 "cntlid": 1, 00:21:51.450 "vendor_id": "0x8086", 00:21:51.450 "model_number": "SPDK bdev Controller", 00:21:51.450 "serial_number": "00000000000000000000", 00:21:51.450 "firmware_revision": "25.01", 00:21:51.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:51.450 "oacs": { 00:21:51.450 "security": 0, 00:21:51.450 "format": 0, 00:21:51.450 "firmware": 0, 00:21:51.450 "ns_manage": 0 00:21:51.450 }, 00:21:51.450 "multi_ctrlr": true, 00:21:51.450 "ana_reporting": false 00:21:51.450 }, 00:21:51.450 "vs": { 00:21:51.450 "nvme_version": "1.3" 00:21:51.450 }, 00:21:51.450 "ns_data": { 00:21:51.450 "id": 1, 00:21:51.450 "can_share": true 00:21:51.450 } 00:21:51.450 } 00:21:51.450 ], 00:21:51.450 "mp_policy": "active_passive" 00:21:51.450 } 00:21:51.450 } 00:21:51.450 ] 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.450 [2024-10-15 13:02:11.591055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.450 [2024-10-15 13:02:11.591114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x1e1c060 (9): Bad file descriptor 00:21:51.450 [2024-10-15 13:02:11.722688] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.450 [ 00:21:51.450 { 00:21:51.450 "name": "nvme0n1", 00:21:51.450 "aliases": [ 00:21:51.450 "773c950a-b6e3-41dd-9e73-e827a3bfac79" 00:21:51.450 ], 00:21:51.450 "product_name": "NVMe disk", 00:21:51.450 "block_size": 512, 00:21:51.450 "num_blocks": 2097152, 00:21:51.450 "uuid": "773c950a-b6e3-41dd-9e73-e827a3bfac79", 00:21:51.450 "numa_id": 1, 00:21:51.450 "assigned_rate_limits": { 00:21:51.450 "rw_ios_per_sec": 0, 00:21:51.450 "rw_mbytes_per_sec": 0, 00:21:51.450 "r_mbytes_per_sec": 0, 00:21:51.450 "w_mbytes_per_sec": 0 00:21:51.450 }, 00:21:51.450 "claimed": false, 00:21:51.450 "zoned": false, 00:21:51.450 "supported_io_types": { 00:21:51.450 "read": true, 00:21:51.450 "write": true, 00:21:51.450 "unmap": false, 00:21:51.450 "flush": true, 00:21:51.450 "reset": true, 00:21:51.450 "nvme_admin": true, 00:21:51.450 "nvme_io": true, 00:21:51.450 "nvme_io_md": false, 00:21:51.450 "write_zeroes": true, 00:21:51.450 "zcopy": false, 00:21:51.450 "get_zone_info": false, 00:21:51.450 "zone_management": false, 00:21:51.450 "zone_append": false, 00:21:51.450 "compare": true, 00:21:51.450 "compare_and_write": true, 00:21:51.450 "abort": true, 00:21:51.450 "seek_hole": false, 00:21:51.450 "seek_data": false, 00:21:51.450 "copy": true, 00:21:51.450 "nvme_iov_md": false 00:21:51.450 }, 00:21:51.450 "memory_domains": [ 00:21:51.450 { 00:21:51.450 
"dma_device_id": "system", 00:21:51.450 "dma_device_type": 1 00:21:51.450 } 00:21:51.450 ], 00:21:51.450 "driver_specific": { 00:21:51.450 "nvme": [ 00:21:51.450 { 00:21:51.450 "trid": { 00:21:51.450 "trtype": "TCP", 00:21:51.450 "adrfam": "IPv4", 00:21:51.450 "traddr": "10.0.0.2", 00:21:51.450 "trsvcid": "4420", 00:21:51.450 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:51.450 }, 00:21:51.450 "ctrlr_data": { 00:21:51.450 "cntlid": 2, 00:21:51.450 "vendor_id": "0x8086", 00:21:51.450 "model_number": "SPDK bdev Controller", 00:21:51.450 "serial_number": "00000000000000000000", 00:21:51.450 "firmware_revision": "25.01", 00:21:51.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:51.450 "oacs": { 00:21:51.450 "security": 0, 00:21:51.450 "format": 0, 00:21:51.450 "firmware": 0, 00:21:51.450 "ns_manage": 0 00:21:51.450 }, 00:21:51.450 "multi_ctrlr": true, 00:21:51.450 "ana_reporting": false 00:21:51.450 }, 00:21:51.450 "vs": { 00:21:51.450 "nvme_version": "1.3" 00:21:51.450 }, 00:21:51.450 "ns_data": { 00:21:51.450 "id": 1, 00:21:51.450 "can_share": true 00:21:51.450 } 00:21:51.450 } 00:21:51.450 ], 00:21:51.450 "mp_policy": "active_passive" 00:21:51.450 } 00:21:51.450 } 00:21:51.450 ] 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.JVSCcl3KVm 00:21:51.450 13:02:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.JVSCcl3KVm 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.JVSCcl3KVm 00:21:51.450 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.711 [2024-10-15 13:02:11.791678] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:51.711 [2024-10-15 13:02:11.792008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.711 13:02:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.711 [2024-10-15 13:02:11.811737] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.711 nvme0n1 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.711 [ 00:21:51.711 { 00:21:51.711 "name": "nvme0n1", 00:21:51.711 "aliases": [ 00:21:51.711 "773c950a-b6e3-41dd-9e73-e827a3bfac79" 00:21:51.711 ], 00:21:51.711 "product_name": "NVMe disk", 00:21:51.711 "block_size": 512, 00:21:51.711 "num_blocks": 2097152, 00:21:51.711 "uuid": "773c950a-b6e3-41dd-9e73-e827a3bfac79", 00:21:51.711 "numa_id": 1, 00:21:51.711 "assigned_rate_limits": { 00:21:51.711 "rw_ios_per_sec": 0, 00:21:51.711 "rw_mbytes_per_sec": 0, 
00:21:51.711 "r_mbytes_per_sec": 0, 00:21:51.711 "w_mbytes_per_sec": 0 00:21:51.711 }, 00:21:51.711 "claimed": false, 00:21:51.711 "zoned": false, 00:21:51.711 "supported_io_types": { 00:21:51.711 "read": true, 00:21:51.711 "write": true, 00:21:51.711 "unmap": false, 00:21:51.711 "flush": true, 00:21:51.711 "reset": true, 00:21:51.711 "nvme_admin": true, 00:21:51.711 "nvme_io": true, 00:21:51.711 "nvme_io_md": false, 00:21:51.711 "write_zeroes": true, 00:21:51.711 "zcopy": false, 00:21:51.711 "get_zone_info": false, 00:21:51.711 "zone_management": false, 00:21:51.711 "zone_append": false, 00:21:51.711 "compare": true, 00:21:51.711 "compare_and_write": true, 00:21:51.711 "abort": true, 00:21:51.711 "seek_hole": false, 00:21:51.711 "seek_data": false, 00:21:51.711 "copy": true, 00:21:51.711 "nvme_iov_md": false 00:21:51.711 }, 00:21:51.711 "memory_domains": [ 00:21:51.711 { 00:21:51.711 "dma_device_id": "system", 00:21:51.711 "dma_device_type": 1 00:21:51.711 } 00:21:51.711 ], 00:21:51.711 "driver_specific": { 00:21:51.711 "nvme": [ 00:21:51.711 { 00:21:51.711 "trid": { 00:21:51.711 "trtype": "TCP", 00:21:51.711 "adrfam": "IPv4", 00:21:51.711 "traddr": "10.0.0.2", 00:21:51.711 "trsvcid": "4421", 00:21:51.711 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:51.711 }, 00:21:51.711 "ctrlr_data": { 00:21:51.711 "cntlid": 3, 00:21:51.711 "vendor_id": "0x8086", 00:21:51.711 "model_number": "SPDK bdev Controller", 00:21:51.711 "serial_number": "00000000000000000000", 00:21:51.711 "firmware_revision": "25.01", 00:21:51.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:51.711 "oacs": { 00:21:51.711 "security": 0, 00:21:51.711 "format": 0, 00:21:51.711 "firmware": 0, 00:21:51.711 "ns_manage": 0 00:21:51.711 }, 00:21:51.711 "multi_ctrlr": true, 00:21:51.711 "ana_reporting": false 00:21:51.711 }, 00:21:51.711 "vs": { 00:21:51.711 "nvme_version": "1.3" 00:21:51.711 }, 00:21:51.711 "ns_data": { 00:21:51.711 "id": 1, 00:21:51.711 "can_share": true 00:21:51.711 } 00:21:51.711 } 
00:21:51.711 ], 00:21:51.711 "mp_policy": "active_passive" 00:21:51.711 } 00:21:51.711 } 00:21:51.711 ] 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.JVSCcl3KVm 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.711 rmmod nvme_tcp 00:21:51.711 rmmod nvme_fabrics 00:21:51.711 rmmod nvme_keyring 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:51.711 13:02:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1290165 ']' 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1290165 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1290165 ']' 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1290165 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.711 13:02:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1290165 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1290165' 00:21:51.971 killing process with pid 1290165 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1290165 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1290165 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:51.971 
13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.971 13:02:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.509 00:21:54.509 real 0m9.435s 00:21:54.509 user 0m3.093s 00:21:54.509 sys 0m4.768s 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:54.509 ************************************ 00:21:54.509 END TEST nvmf_async_init 00:21:54.509 ************************************ 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.509 ************************************ 00:21:54.509 START TEST dma 00:21:54.509 ************************************ 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:21:54.509 * Looking for test storage... 00:21:54.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.509 13:02:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:54.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.509 --rc genhtml_branch_coverage=1 00:21:54.509 --rc genhtml_function_coverage=1 00:21:54.509 --rc genhtml_legend=1 00:21:54.509 --rc geninfo_all_blocks=1 00:21:54.509 --rc geninfo_unexecuted_blocks=1 00:21:54.509 00:21:54.509 ' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:54.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.510 --rc genhtml_branch_coverage=1 00:21:54.510 --rc genhtml_function_coverage=1 
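The xtrace above steps through SPDK's `cmp_versions` helper (scripts/common.sh): it splits both version strings on `.` and `-` via `IFS`, then compares components numerically left to right (`1.15 < 2` here, so `lt 1.15 2` succeeds). A minimal standalone sketch of that logic, with the function name `lt_version` assumed for illustration rather than taken from the actual script:

```shell
#!/usr/bin/env bash
# lt_version A B -- returns 0 (true) when version A sorts strictly before B.
# Mirrors the component-wise comparison traced in the log above.
lt_version() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"   # split "1.15" into (1 15)
    IFS=.- read -ra ver2 <<< "$2"
    local v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0}             # missing components compare as 0
        b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                        # equal versions are not "less than"
}

lt_version 1.15 2 && echo "1.15 < 2"
```

Because components are compared as integers rather than lexically, `1.2 < 1.10` holds, which a plain string comparison would get wrong.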
00:21:54.510 --rc genhtml_legend=1 00:21:54.510 --rc geninfo_all_blocks=1 00:21:54.510 --rc geninfo_unexecuted_blocks=1 00:21:54.510 00:21:54.510 ' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:54.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.510 --rc genhtml_branch_coverage=1 00:21:54.510 --rc genhtml_function_coverage=1 00:21:54.510 --rc genhtml_legend=1 00:21:54.510 --rc geninfo_all_blocks=1 00:21:54.510 --rc geninfo_unexecuted_blocks=1 00:21:54.510 00:21:54.510 ' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:54.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.510 --rc genhtml_branch_coverage=1 00:21:54.510 --rc genhtml_function_coverage=1 00:21:54.510 --rc genhtml_legend=1 00:21:54.510 --rc geninfo_all_blocks=1 00:21:54.510 --rc geninfo_unexecuted_blocks=1 00:21:54.510 00:21:54.510 ' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:54.510 
13:02:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:54.510 00:21:54.510 real 0m0.198s 00:21:54.510 user 0m0.131s 00:21:54.510 sys 0m0.080s 00:21:54.510 13:02:14 
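The `[: : integer expression expected` message captured above comes from `'[' '' -eq 1 ']'` at common.sh line 33: `test`'s `-eq` requires both operands to be integers, so an unset or empty variable makes the test error out (exit status 2) instead of simply evaluating false. A hedged sketch of the usual guard (defaulting the empty value, not the actual SPDK fix):

```shell
#!/usr/bin/env bash
# Demonstrates the failure mode from the log and a defaulting guard.
var=""                              # stands in for the empty variable at common.sh:33

# Unguarded form: [ "$var" -eq 1 ] prints
#   [: : integer expression expected
# and returns status 2 rather than a clean true/false.

# Guarded form: ${var:-0} substitutes 0 when var is unset or empty,
# so the numeric test always sees an integer.
if [ "${var:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

In the log this error is benign: the test run continues because the failing `[` sits in an `if`-style position where a nonzero status just skips the branch.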
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:54.510 ************************************ 00:21:54.510 END TEST dma 00:21:54.510 ************************************ 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.510 ************************************ 00:21:54.510 START TEST nvmf_identify 00:21:54.510 ************************************ 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:54.510 * Looking for test storage... 
00:21:54.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:54.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.510 --rc genhtml_branch_coverage=1 00:21:54.510 --rc genhtml_function_coverage=1 00:21:54.510 --rc genhtml_legend=1 00:21:54.510 --rc geninfo_all_blocks=1 00:21:54.510 --rc geninfo_unexecuted_blocks=1 00:21:54.510 00:21:54.510 ' 00:21:54.510 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:21:54.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.510 --rc genhtml_branch_coverage=1 00:21:54.510 --rc genhtml_function_coverage=1 00:21:54.510 --rc genhtml_legend=1 00:21:54.510 --rc geninfo_all_blocks=1 00:21:54.510 --rc geninfo_unexecuted_blocks=1 00:21:54.511 00:21:54.511 ' 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:54.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.511 --rc genhtml_branch_coverage=1 00:21:54.511 --rc genhtml_function_coverage=1 00:21:54.511 --rc genhtml_legend=1 00:21:54.511 --rc geninfo_all_blocks=1 00:21:54.511 --rc geninfo_unexecuted_blocks=1 00:21:54.511 00:21:54.511 ' 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:54.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.511 --rc genhtml_branch_coverage=1 00:21:54.511 --rc genhtml_function_coverage=1 00:21:54.511 --rc genhtml_legend=1 00:21:54.511 --rc geninfo_all_blocks=1 00:21:54.511 --rc geninfo_unexecuted_blocks=1 00:21:54.511 00:21:54.511 ' 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.511 13:02:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.084 13:02:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:01.084 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.084 
13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:01.084 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:01.084 Found net devices under 0000:86:00.0: cvl_0_0 00:22:01.084 13:02:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:01.084 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:01.085 Found net devices under 0000:86:00.1: cvl_0_1 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:22:01.085 00:22:01.085 --- 10.0.0.2 ping statistics --- 00:22:01.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.085 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:01.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:22:01.085 00:22:01.085 --- 10.0.0.1 ping statistics --- 00:22:01.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.085 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1293988 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1293988 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1293988 ']' 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.085 13:02:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.085 [2024-10-15 13:02:20.798024] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:22:01.085 [2024-10-15 13:02:20.798073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.085 [2024-10-15 13:02:20.870007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.085 [2024-10-15 13:02:20.914285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.085 [2024-10-15 13:02:20.914321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.085 [2024-10-15 13:02:20.914329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.085 [2024-10-15 13:02:20.914335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.085 [2024-10-15 13:02:20.914340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:01.085 [2024-10-15 13:02:20.915873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.085 [2024-10-15 13:02:20.915965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.085 [2024-10-15 13:02:20.916071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.085 [2024-10-15 13:02:20.916072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.085 [2024-10-15 13:02:21.013003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.085 Malloc0 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.085 13:02:21 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.085 [2024-10-15 13:02:21.109941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.085 13:02:21 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.085 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.085 [ 00:22:01.085 { 00:22:01.085 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:01.085 "subtype": "Discovery", 00:22:01.085 "listen_addresses": [ 00:22:01.085 { 00:22:01.085 "trtype": "TCP", 00:22:01.085 "adrfam": "IPv4", 00:22:01.085 "traddr": "10.0.0.2", 00:22:01.085 "trsvcid": "4420" 00:22:01.085 } 00:22:01.085 ], 00:22:01.085 "allow_any_host": true, 00:22:01.085 "hosts": [] 00:22:01.085 }, 00:22:01.085 { 00:22:01.085 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.085 "subtype": "NVMe", 00:22:01.085 "listen_addresses": [ 00:22:01.086 { 00:22:01.086 "trtype": "TCP", 00:22:01.086 "adrfam": "IPv4", 00:22:01.086 "traddr": "10.0.0.2", 00:22:01.086 "trsvcid": "4420" 00:22:01.086 } 00:22:01.086 ], 00:22:01.086 "allow_any_host": true, 00:22:01.086 "hosts": [], 00:22:01.086 "serial_number": "SPDK00000000000001", 00:22:01.086 "model_number": "SPDK bdev Controller", 00:22:01.086 "max_namespaces": 32, 00:22:01.086 "min_cntlid": 1, 00:22:01.086 "max_cntlid": 65519, 00:22:01.086 "namespaces": [ 00:22:01.086 { 00:22:01.086 "nsid": 1, 00:22:01.086 "bdev_name": "Malloc0", 00:22:01.086 "name": "Malloc0", 00:22:01.086 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:01.086 "eui64": "ABCDEF0123456789", 00:22:01.086 "uuid": "f211670a-54a8-4adf-800b-ca4217449a17" 00:22:01.086 } 00:22:01.086 ] 00:22:01.086 } 00:22:01.086 ] 00:22:01.086 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.086 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:01.086 [2024-10-15 13:02:21.162989] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:22:01.086 [2024-10-15 13:02:21.163023] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294010 ] 00:22:01.086 [2024-10-15 13:02:21.192924] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:01.086 [2024-10-15 13:02:21.192966] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:01.086 [2024-10-15 13:02:21.192971] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:01.086 [2024-10-15 13:02:21.192982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:01.086 [2024-10-15 13:02:21.192990] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:01.086 [2024-10-15 13:02:21.193581] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:01.086 [2024-10-15 13:02:21.193618] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14f7760 0 00:22:01.086 [2024-10-15 13:02:21.199613] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:01.086 [2024-10-15 13:02:21.199629] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:01.086 [2024-10-15 13:02:21.199634] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:01.086 [2024-10-15 13:02:21.199643] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:01.086 [2024-10-15 13:02:21.199670] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.199675] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.199679] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14f7760) 00:22:01.086 [2024-10-15 13:02:21.199690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:01.086 [2024-10-15 13:02:21.199707] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557480, cid 0, qid 0 00:22:01.086 [2024-10-15 13:02:21.207611] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.086 [2024-10-15 13:02:21.207621] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.086 [2024-10-15 13:02:21.207624] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.207627] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557480) on tqpair=0x14f7760 00:22:01.086 [2024-10-15 13:02:21.207637] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:01.086 [2024-10-15 13:02:21.207643] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:01.086 [2024-10-15 13:02:21.207648] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:01.086 [2024-10-15 13:02:21.207660] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.207663] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.207667] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14f7760) 
00:22:01.086 [2024-10-15 13:02:21.207674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.086 [2024-10-15 13:02:21.207687] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557480, cid 0, qid 0 00:22:01.086 [2024-10-15 13:02:21.207839] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.086 [2024-10-15 13:02:21.207845] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.086 [2024-10-15 13:02:21.207848] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.207851] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557480) on tqpair=0x14f7760 00:22:01.086 [2024-10-15 13:02:21.207856] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:01.086 [2024-10-15 13:02:21.207862] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:01.086 [2024-10-15 13:02:21.207868] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.207872] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.207875] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14f7760) 00:22:01.086 [2024-10-15 13:02:21.207881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.086 [2024-10-15 13:02:21.207891] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557480, cid 0, qid 0 00:22:01.086 [2024-10-15 13:02:21.207974] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.086 [2024-10-15 13:02:21.207980] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:01.086 [2024-10-15 13:02:21.207983] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.207986] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557480) on tqpair=0x14f7760 00:22:01.086 [2024-10-15 13:02:21.207990] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:01.086 [2024-10-15 13:02:21.207997] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:01.086 [2024-10-15 13:02:21.208005] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.208009] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.208012] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14f7760) 00:22:01.086 [2024-10-15 13:02:21.208018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.086 [2024-10-15 13:02:21.208027] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557480, cid 0, qid 0 00:22:01.086 [2024-10-15 13:02:21.208095] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.086 [2024-10-15 13:02:21.208101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.086 [2024-10-15 13:02:21.208104] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.208107] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557480) on tqpair=0x14f7760 00:22:01.086 [2024-10-15 13:02:21.208112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:01.086 [2024-10-15 13:02:21.208119] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.208123] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.208126] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14f7760) 00:22:01.086 [2024-10-15 13:02:21.208132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.086 [2024-10-15 13:02:21.208141] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557480, cid 0, qid 0 00:22:01.086 [2024-10-15 13:02:21.208200] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.086 [2024-10-15 13:02:21.208205] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.086 [2024-10-15 13:02:21.208208] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.208211] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557480) on tqpair=0x14f7760 00:22:01.086 [2024-10-15 13:02:21.208215] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:01.086 [2024-10-15 13:02:21.208219] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:01.086 [2024-10-15 13:02:21.208226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:01.086 [2024-10-15 13:02:21.208331] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:01.086 [2024-10-15 13:02:21.208335] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:22:01.086 [2024-10-15 13:02:21.208342] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.208346] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.086 [2024-10-15 13:02:21.208349] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14f7760) 00:22:01.086 [2024-10-15 13:02:21.208354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.086 [2024-10-15 13:02:21.208363] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557480, cid 0, qid 0 00:22:01.086 [2024-10-15 13:02:21.208437] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.086 [2024-10-15 13:02:21.208442] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.087 [2024-10-15 13:02:21.208446] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208449] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557480) on tqpair=0x14f7760 00:22:01.087 [2024-10-15 13:02:21.208455] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:01.087 [2024-10-15 13:02:21.208464] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208467] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208471] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14f7760) 00:22:01.087 [2024-10-15 13:02:21.208476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.087 [2024-10-15 13:02:21.208486] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557480, cid 0, qid 0 00:22:01.087 [2024-10-15 
13:02:21.208549] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.087 [2024-10-15 13:02:21.208557] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.087 [2024-10-15 13:02:21.208561] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208567] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557480) on tqpair=0x14f7760 00:22:01.087 [2024-10-15 13:02:21.208571] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:01.087 [2024-10-15 13:02:21.208575] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:01.087 [2024-10-15 13:02:21.208582] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:01.087 [2024-10-15 13:02:21.208591] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:01.087 [2024-10-15 13:02:21.208598] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208607] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14f7760) 00:22:01.087 [2024-10-15 13:02:21.208613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.087 [2024-10-15 13:02:21.208622] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557480, cid 0, qid 0 00:22:01.087 [2024-10-15 13:02:21.208719] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.087 [2024-10-15 13:02:21.208725] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:22:01.087 [2024-10-15 13:02:21.208728] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208731] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14f7760): datao=0, datal=4096, cccid=0 00:22:01.087 [2024-10-15 13:02:21.208735] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1557480) on tqpair(0x14f7760): expected_datao=0, payload_size=4096 00:22:01.087 [2024-10-15 13:02:21.208739] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208746] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208750] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208760] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.087 [2024-10-15 13:02:21.208765] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.087 [2024-10-15 13:02:21.208768] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208771] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557480) on tqpair=0x14f7760 00:22:01.087 [2024-10-15 13:02:21.208778] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:01.087 [2024-10-15 13:02:21.208783] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:01.087 [2024-10-15 13:02:21.208789] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:01.087 [2024-10-15 13:02:21.208794] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:01.087 [2024-10-15 13:02:21.208798] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:22:01.087 [2024-10-15 13:02:21.208802] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:01.087 [2024-10-15 13:02:21.208809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:01.087 [2024-10-15 13:02:21.208814] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208818] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208821] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14f7760) 00:22:01.087 [2024-10-15 13:02:21.208826] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.087 [2024-10-15 13:02:21.208836] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557480, cid 0, qid 0 00:22:01.087 [2024-10-15 13:02:21.208904] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.087 [2024-10-15 13:02:21.208909] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.087 [2024-10-15 13:02:21.208912] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208915] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557480) on tqpair=0x14f7760 00:22:01.087 [2024-10-15 13:02:21.208924] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208927] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208930] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14f7760) 00:22:01.087 [2024-10-15 13:02:21.208935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.087 [2024-10-15 13:02:21.208940] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208944] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208946] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14f7760) 00:22:01.087 [2024-10-15 13:02:21.208951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.087 [2024-10-15 13:02:21.208956] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208959] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208962] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14f7760) 00:22:01.087 [2024-10-15 13:02:21.208967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.087 [2024-10-15 13:02:21.208972] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208975] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.208978] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14f7760) 00:22:01.087 [2024-10-15 13:02:21.208983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.087 [2024-10-15 13:02:21.208987] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:01.087 [2024-10-15 13:02:21.208995] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:22:01.087 [2024-10-15 13:02:21.209000] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.209005] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14f7760) 00:22:01.087 [2024-10-15 13:02:21.209011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.087 [2024-10-15 13:02:21.209022] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557480, cid 0, qid 0 00:22:01.087 [2024-10-15 13:02:21.209027] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557600, cid 1, qid 0 00:22:01.087 [2024-10-15 13:02:21.209031] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557780, cid 2, qid 0 00:22:01.087 [2024-10-15 13:02:21.209035] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557900, cid 3, qid 0 00:22:01.087 [2024-10-15 13:02:21.209039] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557a80, cid 4, qid 0 00:22:01.087 [2024-10-15 13:02:21.209133] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.087 [2024-10-15 13:02:21.209138] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.087 [2024-10-15 13:02:21.209141] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.209144] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557a80) on tqpair=0x14f7760 00:22:01.087 [2024-10-15 13:02:21.209151] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:01.087 [2024-10-15 13:02:21.209156] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:01.087 [2024-10-15 13:02:21.209165] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.209170] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14f7760) 00:22:01.087 [2024-10-15 13:02:21.209177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.087 [2024-10-15 13:02:21.209186] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557a80, cid 4, qid 0 00:22:01.087 [2024-10-15 13:02:21.209258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.087 [2024-10-15 13:02:21.209264] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.087 [2024-10-15 13:02:21.209267] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.209270] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14f7760): datao=0, datal=4096, cccid=4 00:22:01.087 [2024-10-15 13:02:21.209274] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1557a80) on tqpair(0x14f7760): expected_datao=0, payload_size=4096 00:22:01.087 [2024-10-15 13:02:21.209277] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.209289] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.209292] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.087 [2024-10-15 13:02:21.249676] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.087 [2024-10-15 13:02:21.249688] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.088 [2024-10-15 13:02:21.249692] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.249695] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557a80) on tqpair=0x14f7760 00:22:01.088 [2024-10-15 13:02:21.249708] 
nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:01.088 [2024-10-15 13:02:21.249733] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.249738] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14f7760) 00:22:01.088 [2024-10-15 13:02:21.249745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.088 [2024-10-15 13:02:21.249751] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.249759] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.249762] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14f7760) 00:22:01.088 [2024-10-15 13:02:21.249767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.088 [2024-10-15 13:02:21.249780] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557a80, cid 4, qid 0 00:22:01.088 [2024-10-15 13:02:21.249784] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557c00, cid 5, qid 0 00:22:01.088 [2024-10-15 13:02:21.249883] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.088 [2024-10-15 13:02:21.249889] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.088 [2024-10-15 13:02:21.249892] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.249895] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14f7760): datao=0, datal=1024, cccid=4 00:22:01.088 [2024-10-15 13:02:21.249899] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1557a80) on tqpair(0x14f7760): expected_datao=0, 
payload_size=1024 00:22:01.088 [2024-10-15 13:02:21.249903] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.249908] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.249911] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.249916] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.088 [2024-10-15 13:02:21.249921] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.088 [2024-10-15 13:02:21.249924] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.249927] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557c00) on tqpair=0x14f7760 00:22:01.088 [2024-10-15 13:02:21.290672] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.088 [2024-10-15 13:02:21.290683] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.088 [2024-10-15 13:02:21.290686] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.290690] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557a80) on tqpair=0x14f7760 00:22:01.088 [2024-10-15 13:02:21.290705] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.290709] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14f7760) 00:22:01.088 [2024-10-15 13:02:21.290716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.088 [2024-10-15 13:02:21.290732] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557a80, cid 4, qid 0 00:22:01.088 [2024-10-15 13:02:21.290800] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.088 [2024-10-15 13:02:21.290806] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.088 [2024-10-15 13:02:21.290809] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.290812] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14f7760): datao=0, datal=3072, cccid=4 00:22:01.088 [2024-10-15 13:02:21.290816] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1557a80) on tqpair(0x14f7760): expected_datao=0, payload_size=3072 00:22:01.088 [2024-10-15 13:02:21.290820] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.290835] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.290839] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.290873] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.088 [2024-10-15 13:02:21.290879] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.088 [2024-10-15 13:02:21.290882] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.290885] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557a80) on tqpair=0x14f7760 00:22:01.088 [2024-10-15 13:02:21.290896] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.088 [2024-10-15 13:02:21.290899] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14f7760) 00:22:01.088 [2024-10-15 13:02:21.290905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.088 [2024-10-15 13:02:21.290917] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557a80, cid 4, qid 0 00:22:01.088 [2024-10-15 13:02:21.290986] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.088 [2024-10-15 
13:02:21.290992] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:01.088 [2024-10-15 13:02:21.290995] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:01.088 [2024-10-15 13:02:21.290998] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14f7760): datao=0, datal=8, cccid=4
00:22:01.088 [2024-10-15 13:02:21.291002] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1557a80) on tqpair(0x14f7760): expected_datao=0, payload_size=8
00:22:01.088 [2024-10-15 13:02:21.291005] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.088 [2024-10-15 13:02:21.291011] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:01.088 [2024-10-15 13:02:21.291014] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:01.088 [2024-10-15 13:02:21.331657] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.088 [2024-10-15 13:02:21.331667] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.088 [2024-10-15 13:02:21.331670] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.088 [2024-10-15 13:02:21.331673] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557a80) on tqpair=0x14f7760
00:22:01.088 =====================================================
00:22:01.088 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:01.088 =====================================================
00:22:01.088 Controller Capabilities/Features
00:22:01.088 ================================
00:22:01.088 Vendor ID: 0000
00:22:01.088 Subsystem Vendor ID: 0000
00:22:01.088 Serial Number: ....................
00:22:01.088 Model Number: ........................................
00:22:01.088 Firmware Version: 25.01
00:22:01.088 Recommended Arb Burst: 0
00:22:01.088 IEEE OUI Identifier: 00 00 00
00:22:01.088 Multi-path I/O
00:22:01.088 May have multiple subsystem ports: No
00:22:01.088 May have multiple controllers: No
00:22:01.088 Associated with SR-IOV VF: No
00:22:01.088 Max Data Transfer Size: 131072
00:22:01.088 Max Number of Namespaces: 0
00:22:01.088 Max Number of I/O Queues: 1024
00:22:01.088 NVMe Specification Version (VS): 1.3
00:22:01.088 NVMe Specification Version (Identify): 1.3
00:22:01.088 Maximum Queue Entries: 128
00:22:01.088 Contiguous Queues Required: Yes
00:22:01.088 Arbitration Mechanisms Supported
00:22:01.088 Weighted Round Robin: Not Supported
00:22:01.088 Vendor Specific: Not Supported
00:22:01.088 Reset Timeout: 15000 ms
00:22:01.088 Doorbell Stride: 4 bytes
00:22:01.088 NVM Subsystem Reset: Not Supported
00:22:01.088 Command Sets Supported
00:22:01.088 NVM Command Set: Supported
00:22:01.088 Boot Partition: Not Supported
00:22:01.088 Memory Page Size Minimum: 4096 bytes
00:22:01.088 Memory Page Size Maximum: 4096 bytes
00:22:01.088 Persistent Memory Region: Not Supported
00:22:01.088 Optional Asynchronous Events Supported
00:22:01.088 Namespace Attribute Notices: Not Supported
00:22:01.088 Firmware Activation Notices: Not Supported
00:22:01.088 ANA Change Notices: Not Supported
00:22:01.088 PLE Aggregate Log Change Notices: Not Supported
00:22:01.088 LBA Status Info Alert Notices: Not Supported
00:22:01.088 EGE Aggregate Log Change Notices: Not Supported
00:22:01.088 Normal NVM Subsystem Shutdown event: Not Supported
00:22:01.088 Zone Descriptor Change Notices: Not Supported
00:22:01.088 Discovery Log Change Notices: Supported
00:22:01.088 Controller Attributes
00:22:01.088 128-bit Host Identifier: Not Supported
00:22:01.088 Non-Operational Permissive Mode: Not Supported
00:22:01.088 NVM Sets: Not Supported
00:22:01.088 Read Recovery Levels: Not Supported
00:22:01.088 Endurance Groups: Not Supported
00:22:01.088 Predictable Latency Mode: Not Supported
00:22:01.088 Traffic Based Keep ALive: Not Supported
00:22:01.088 Namespace Granularity: Not Supported
00:22:01.088 SQ Associations: Not Supported
00:22:01.088 UUID List: Not Supported
00:22:01.088 Multi-Domain Subsystem: Not Supported
00:22:01.088 Fixed Capacity Management: Not Supported
00:22:01.088 Variable Capacity Management: Not Supported
00:22:01.088 Delete Endurance Group: Not Supported
00:22:01.088 Delete NVM Set: Not Supported
00:22:01.088 Extended LBA Formats Supported: Not Supported
00:22:01.089 Flexible Data Placement Supported: Not Supported
00:22:01.089 
00:22:01.089 Controller Memory Buffer Support
00:22:01.089 ================================
00:22:01.089 Supported: No
00:22:01.089 
00:22:01.089 Persistent Memory Region Support
00:22:01.089 ================================
00:22:01.089 Supported: No
00:22:01.089 
00:22:01.089 Admin Command Set Attributes
00:22:01.089 ============================
00:22:01.089 Security Send/Receive: Not Supported
00:22:01.089 Format NVM: Not Supported
00:22:01.089 Firmware Activate/Download: Not Supported
00:22:01.089 Namespace Management: Not Supported
00:22:01.089 Device Self-Test: Not Supported
00:22:01.089 Directives: Not Supported
00:22:01.089 NVMe-MI: Not Supported
00:22:01.089 Virtualization Management: Not Supported
00:22:01.089 Doorbell Buffer Config: Not Supported
00:22:01.089 Get LBA Status Capability: Not Supported
00:22:01.089 Command & Feature Lockdown Capability: Not Supported
00:22:01.089 Abort Command Limit: 1
00:22:01.089 Async Event Request Limit: 4
00:22:01.089 Number of Firmware Slots: N/A
00:22:01.089 Firmware Slot 1 Read-Only: N/A
00:22:01.089 Firmware Activation Without Reset: N/A
00:22:01.089 Multiple Update Detection Support: N/A
00:22:01.089 Firmware Update Granularity: No Information Provided
00:22:01.089 Per-Namespace SMART Log: No
00:22:01.089 Asymmetric Namespace Access Log Page: Not Supported
00:22:01.089 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:01.089 Command Effects Log Page: Not Supported
00:22:01.089 Get Log Page Extended Data: Supported
00:22:01.089 Telemetry Log Pages: Not Supported
00:22:01.089 Persistent Event Log Pages: Not Supported
00:22:01.089 Supported Log Pages Log Page: May Support
00:22:01.089 Commands Supported & Effects Log Page: Not Supported
00:22:01.089 Feature Identifiers & Effects Log Page: May Support
00:22:01.089 NVMe-MI Commands & Effects Log Page: May Support
00:22:01.089 Data Area 4 for Telemetry Log: Not Supported
00:22:01.089 Error Log Page Entries Supported: 128
00:22:01.089 Keep Alive: Not Supported
00:22:01.089 
00:22:01.089 NVM Command Set Attributes
00:22:01.089 ==========================
00:22:01.089 Submission Queue Entry Size
00:22:01.089 Max: 1
00:22:01.089 Min: 1
00:22:01.089 Completion Queue Entry Size
00:22:01.089 Max: 1
00:22:01.089 Min: 1
00:22:01.089 Number of Namespaces: 0
00:22:01.089 Compare Command: Not Supported
00:22:01.089 Write Uncorrectable Command: Not Supported
00:22:01.089 Dataset Management Command: Not Supported
00:22:01.089 Write Zeroes Command: Not Supported
00:22:01.089 Set Features Save Field: Not Supported
00:22:01.089 Reservations: Not Supported
00:22:01.089 Timestamp: Not Supported
00:22:01.089 Copy: Not Supported
00:22:01.089 Volatile Write Cache: Not Present
00:22:01.089 Atomic Write Unit (Normal): 1
00:22:01.089 Atomic Write Unit (PFail): 1
00:22:01.089 Atomic Compare & Write Unit: 1
00:22:01.089 Fused Compare & Write: Supported
00:22:01.089 Scatter-Gather List
00:22:01.089 SGL Command Set: Supported
00:22:01.089 SGL Keyed: Supported
00:22:01.089 SGL Bit Bucket Descriptor: Not Supported
00:22:01.089 SGL Metadata Pointer: Not Supported
00:22:01.089 Oversized SGL: Not Supported
00:22:01.089 SGL Metadata Address: Not Supported
00:22:01.089 SGL Offset: Supported
00:22:01.089 Transport SGL Data Block: Not Supported
00:22:01.089 Replay Protected Memory Block: Not Supported
00:22:01.089 
00:22:01.089 Firmware Slot Information
00:22:01.089 =========================
00:22:01.089 Active slot: 0
00:22:01.089 
00:22:01.089 
00:22:01.089 Error Log
00:22:01.089 =========
00:22:01.089 
00:22:01.089 Active Namespaces
00:22:01.089 =================
00:22:01.089 Discovery Log Page
00:22:01.089 ==================
00:22:01.089 Generation Counter: 2
00:22:01.089 Number of Records: 2
00:22:01.089 Record Format: 0
00:22:01.089 
00:22:01.089 Discovery Log Entry 0
00:22:01.089 ----------------------
00:22:01.089 Transport Type: 3 (TCP)
00:22:01.089 Address Family: 1 (IPv4)
00:22:01.089 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:01.089 Entry Flags:
00:22:01.089 Duplicate Returned Information: 1
00:22:01.089 Explicit Persistent Connection Support for Discovery: 1
00:22:01.089 Transport Requirements:
00:22:01.089 Secure Channel: Not Required
00:22:01.089 Port ID: 0 (0x0000)
00:22:01.089 Controller ID: 65535 (0xffff)
00:22:01.089 Admin Max SQ Size: 128
00:22:01.089 Transport Service Identifier: 4420
00:22:01.089 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:01.089 Transport Address: 10.0.0.2
00:22:01.089 Discovery Log Entry 1
00:22:01.089 ----------------------
00:22:01.089 Transport Type: 3 (TCP)
00:22:01.089 Address Family: 1 (IPv4)
00:22:01.089 Subsystem Type: 2 (NVM Subsystem)
00:22:01.089 Entry Flags:
00:22:01.089 Duplicate Returned Information: 0
00:22:01.089 Explicit Persistent Connection Support for Discovery: 0
00:22:01.089 Transport Requirements:
00:22:01.089 Secure Channel: Not Required
00:22:01.089 Port ID: 0 (0x0000)
00:22:01.089 Controller ID: 65535 (0xffff)
00:22:01.089 Admin Max SQ Size: 128
00:22:01.089 Transport Service Identifier: 4420
00:22:01.089 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:01.089 Transport Address: 10.0.0.2 [2024-10-15 13:02:21.331747] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:22:01.089 [2024-10-15 13:02:21.331757]
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557480) on tqpair=0x14f7760 00:22:01.089 [2024-10-15 13:02:21.331763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.089 [2024-10-15 13:02:21.331768] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557600) on tqpair=0x14f7760 00:22:01.089 [2024-10-15 13:02:21.331772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.089 [2024-10-15 13:02:21.331776] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557780) on tqpair=0x14f7760 00:22:01.089 [2024-10-15 13:02:21.331780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.089 [2024-10-15 13:02:21.331784] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557900) on tqpair=0x14f7760 00:22:01.089 [2024-10-15 13:02:21.331788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.090 [2024-10-15 13:02:21.331795] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.331799] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.331802] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14f7760) 00:22:01.090 [2024-10-15 13:02:21.331809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.090 [2024-10-15 13:02:21.331822] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557900, cid 3, qid 0 00:22:01.090 [2024-10-15 13:02:21.331877] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.090 [2024-10-15 13:02:21.331883] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.090 [2024-10-15 13:02:21.331886] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.331889] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557900) on tqpair=0x14f7760 00:22:01.090 [2024-10-15 13:02:21.331897] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.331900] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.331903] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14f7760) 00:22:01.090 [2024-10-15 13:02:21.331909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.090 [2024-10-15 13:02:21.331921] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557900, cid 3, qid 0 00:22:01.090 [2024-10-15 13:02:21.331986] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.090 [2024-10-15 13:02:21.331992] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.090 [2024-10-15 13:02:21.331995] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.331998] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557900) on tqpair=0x14f7760 00:22:01.090 [2024-10-15 13:02:21.332004] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:01.090 [2024-10-15 13:02:21.332008] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:01.090 [2024-10-15 13:02:21.332016] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332020] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.090 [2024-10-15 
13:02:21.332023] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14f7760) 00:22:01.090 [2024-10-15 13:02:21.332028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.090 [2024-10-15 13:02:21.332038] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557900, cid 3, qid 0 00:22:01.090 [2024-10-15 13:02:21.332095] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.090 [2024-10-15 13:02:21.332101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.090 [2024-10-15 13:02:21.332104] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332107] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557900) on tqpair=0x14f7760 00:22:01.090 [2024-10-15 13:02:21.332116] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332119] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332122] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14f7760) 00:22:01.090 [2024-10-15 13:02:21.332128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.090 [2024-10-15 13:02:21.332137] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557900, cid 3, qid 0 00:22:01.090 [2024-10-15 13:02:21.332193] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.090 [2024-10-15 13:02:21.332199] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.090 [2024-10-15 13:02:21.332202] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332205] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557900) on tqpair=0x14f7760 
00:22:01.090 [2024-10-15 13:02:21.332214] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332220] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14f7760) 00:22:01.090 [2024-10-15 13:02:21.332226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.090 [2024-10-15 13:02:21.332235] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557900, cid 3, qid 0 00:22:01.090 [2024-10-15 13:02:21.332293] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.090 [2024-10-15 13:02:21.332299] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.090 [2024-10-15 13:02:21.332304] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332307] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557900) on tqpair=0x14f7760 00:22:01.090 [2024-10-15 13:02:21.332315] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332318] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332322] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14f7760) 00:22:01.090 [2024-10-15 13:02:21.332327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.090 [2024-10-15 13:02:21.332336] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557900, cid 3, qid 0 00:22:01.090 [2024-10-15 13:02:21.332415] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.090 [2024-10-15 13:02:21.332420] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.090 
[2024-10-15 13:02:21.332423] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332426] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557900) on tqpair=0x14f7760 00:22:01.090 [2024-10-15 13:02:21.332435] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332438] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.090 [2024-10-15 13:02:21.332441] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14f7760) 00:22:01.090 [2024-10-15 13:02:21.332447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.090 [2024-10-15 13:02:21.332456] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557900, cid 3, qid 0 00:22:01.090
[... identical nine-message debug cycle (pdu type = 5, psh_handle, capsule_resp_hdr_handle, req_complete, build_contig_request, capsule_cmd_send, FABRIC PROPERTY GET, cmd_send_complete for tcp_req 0x1557900, cid 3, qid 0) repeated verbatim from 13:02:21.332517 through 13:02:21.335545 ...]
[2024-10-15 13:02:21.339609] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.093 [2024-10-15 13:02:21.339617] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.093 [2024-10-15 13:02:21.339629] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.093 [2024-10-15 13:02:21.339632] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557900) on tqpair=0x14f7760 00:22:01.093 [2024-10-15 13:02:21.339644] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.093 [2024-10-15 13:02:21.339647] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.093 [2024-10-15 13:02:21.339650] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14f7760) 00:22:01.093 [2024-10-15 13:02:21.339657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.093 [2024-10-15 13:02:21.339668] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1557900, cid 3, qid 0 00:22:01.093 [2024-10-15 13:02:21.339734] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.093 [2024-10-15 13:02:21.339742] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.093 [2024-10-15 13:02:21.339745] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.093 [2024-10-15 13:02:21.339749]
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1557900) on tqpair=0x14f7760 00:22:01.093 [2024-10-15 13:02:21.339755] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:22:01.093 00:22:01.093 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:01.093 [2024-10-15 13:02:21.377047] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:22:01.093 [2024-10-15 13:02:21.377082] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294103 ] 00:22:01.355 [2024-10-15 13:02:21.402599] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:01.355 [2024-10-15 13:02:21.406647] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:01.355 [2024-10-15 13:02:21.406652] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:01.355 [2024-10-15 13:02:21.406662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:01.355 [2024-10-15 13:02:21.406669] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:01.355 [2024-10-15 13:02:21.406991] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:01.355 [2024-10-15 13:02:21.407014] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11c0760 0 00:22:01.355 [2024-10-15 13:02:21.420608] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:01.355 [2024-10-15 13:02:21.420626] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:01.355 [2024-10-15 13:02:21.420630] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:01.355 [2024-10-15 13:02:21.420633] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:01.355 [2024-10-15 13:02:21.420655] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.355 [2024-10-15 13:02:21.420660] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.355 [2024-10-15 13:02:21.420663] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c0760) 00:22:01.355 [2024-10-15 13:02:21.420673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:01.355 [2024-10-15 13:02:21.420689] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220480, cid 0, qid 0 00:22:01.355 [2024-10-15 13:02:21.427610] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.355 [2024-10-15 13:02:21.427619] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.355 [2024-10-15 13:02:21.427622] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.355 [2024-10-15 13:02:21.427626] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220480) on tqpair=0x11c0760 00:22:01.355 [2024-10-15 13:02:21.427637] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:01.355 [2024-10-15 13:02:21.427642] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:01.355 [2024-10-15 13:02:21.427647] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:01.355 [2024-10-15 13:02:21.427660] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.355 [2024-10-15 13:02:21.427664] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.355 [2024-10-15 13:02:21.427667] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c0760) 00:22:01.355 [2024-10-15 13:02:21.427674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.356 [2024-10-15 13:02:21.427687] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220480, cid 0, qid 0 00:22:01.356 [2024-10-15 13:02:21.427764] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.356 [2024-10-15 13:02:21.427769] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.356 [2024-10-15 13:02:21.427772] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.427776] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220480) on tqpair=0x11c0760 00:22:01.356 [2024-10-15 13:02:21.427780] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:01.356 [2024-10-15 13:02:21.427786] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:01.356 [2024-10-15 13:02:21.427792] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.427796] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.427799] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c0760) 00:22:01.356 [2024-10-15 13:02:21.427804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.356 [2024-10-15 13:02:21.427814] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220480, cid 0, qid 0 00:22:01.356 [2024-10-15 13:02:21.427879] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.356 [2024-10-15 13:02:21.427884] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.356 [2024-10-15 13:02:21.427887] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.427891] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220480) on tqpair=0x11c0760 00:22:01.356 [2024-10-15 13:02:21.427895] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:01.356 [2024-10-15 13:02:21.427902] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:01.356 [2024-10-15 13:02:21.427907] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.427911] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.427914] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c0760) 00:22:01.356 [2024-10-15 13:02:21.427919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.356 [2024-10-15 13:02:21.427929] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220480, cid 0, qid 0 00:22:01.356 [2024-10-15 13:02:21.427996] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.356 [2024-10-15 13:02:21.428001] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.356 [2024-10-15 13:02:21.428004] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428008] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220480) on tqpair=0x11c0760 
00:22:01.356 [2024-10-15 13:02:21.428012] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:01.356 [2024-10-15 13:02:21.428019] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428023] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428026] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c0760) 00:22:01.356 [2024-10-15 13:02:21.428034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.356 [2024-10-15 13:02:21.428043] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220480, cid 0, qid 0 00:22:01.356 [2024-10-15 13:02:21.428114] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.356 [2024-10-15 13:02:21.428120] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.356 [2024-10-15 13:02:21.428123] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428126] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220480) on tqpair=0x11c0760 00:22:01.356 [2024-10-15 13:02:21.428129] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:01.356 [2024-10-15 13:02:21.428134] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:01.356 [2024-10-15 13:02:21.428140] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:01.356 [2024-10-15 13:02:21.428245] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 
00:22:01.356 [2024-10-15 13:02:21.428248] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:01.356 [2024-10-15 13:02:21.428254] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428257] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428261] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c0760) 00:22:01.356 [2024-10-15 13:02:21.428266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.356 [2024-10-15 13:02:21.428276] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220480, cid 0, qid 0 00:22:01.356 [2024-10-15 13:02:21.428335] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.356 [2024-10-15 13:02:21.428341] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.356 [2024-10-15 13:02:21.428344] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428347] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220480) on tqpair=0x11c0760 00:22:01.356 [2024-10-15 13:02:21.428351] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:01.356 [2024-10-15 13:02:21.428358] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428362] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428365] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c0760) 00:22:01.356 [2024-10-15 13:02:21.428370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:01.356 [2024-10-15 13:02:21.428380] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220480, cid 0, qid 0 00:22:01.356 [2024-10-15 13:02:21.428441] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.356 [2024-10-15 13:02:21.428446] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.356 [2024-10-15 13:02:21.428449] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428453] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220480) on tqpair=0x11c0760 00:22:01.356 [2024-10-15 13:02:21.428456] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:01.356 [2024-10-15 13:02:21.428460] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:01.356 [2024-10-15 13:02:21.428467] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:01.356 [2024-10-15 13:02:21.428479] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:01.356 [2024-10-15 13:02:21.428487] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428490] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c0760) 00:22:01.356 [2024-10-15 13:02:21.428495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.356 [2024-10-15 13:02:21.428505] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220480, cid 0, qid 0 00:22:01.356 [2024-10-15 13:02:21.428593] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.356 [2024-10-15 13:02:21.428598] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.356 [2024-10-15 13:02:21.428607] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428611] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c0760): datao=0, datal=4096, cccid=0 00:22:01.356 [2024-10-15 13:02:21.428615] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1220480) on tqpair(0x11c0760): expected_datao=0, payload_size=4096 00:22:01.356 [2024-10-15 13:02:21.428618] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428630] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428634] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.356 [2024-10-15 13:02:21.428674] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.356 [2024-10-15 13:02:21.428677] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428680] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220480) on tqpair=0x11c0760 00:22:01.356 [2024-10-15 13:02:21.428686] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:01.356 [2024-10-15 13:02:21.428690] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:01.356 [2024-10-15 13:02:21.428694] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:01.356 [2024-10-15 13:02:21.428697] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:01.356 
[2024-10-15 13:02:21.428701] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:01.356 [2024-10-15 13:02:21.428705] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:01.356 [2024-10-15 13:02:21.428712] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:01.356 [2024-10-15 13:02:21.428718] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428721] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.356 [2024-10-15 13:02:21.428724] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c0760) 00:22:01.356 [2024-10-15 13:02:21.428730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.356 [2024-10-15 13:02:21.428740] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220480, cid 0, qid 0 00:22:01.357 [2024-10-15 13:02:21.428802] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.357 [2024-10-15 13:02:21.428808] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.357 [2024-10-15 13:02:21.428811] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.428814] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220480) on tqpair=0x11c0760 00:22:01.357 [2024-10-15 13:02:21.428823] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.428826] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.428830] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c0760) 00:22:01.357 
[2024-10-15 13:02:21.428835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.357 [2024-10-15 13:02:21.428840] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.428843] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.428846] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11c0760) 00:22:01.357 [2024-10-15 13:02:21.428851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.357 [2024-10-15 13:02:21.428856] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.428859] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.428862] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11c0760) 00:22:01.357 [2024-10-15 13:02:21.428867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.357 [2024-10-15 13:02:21.428872] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.428875] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.428878] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.357 [2024-10-15 13:02:21.428883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.357 [2024-10-15 13:02:21.428887] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:01.357 [2024-10-15 13:02:21.428895] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:01.357 [2024-10-15 13:02:21.428900] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.428903] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c0760) 00:22:01.357 [2024-10-15 13:02:21.428909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.357 [2024-10-15 13:02:21.428919] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220480, cid 0, qid 0 00:22:01.357 [2024-10-15 13:02:21.428924] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220600, cid 1, qid 0 00:22:01.357 [2024-10-15 13:02:21.428928] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220780, cid 2, qid 0 00:22:01.357 [2024-10-15 13:02:21.428932] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.357 [2024-10-15 13:02:21.428936] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220a80, cid 4, qid 0 00:22:01.357 [2024-10-15 13:02:21.429026] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.357 [2024-10-15 13:02:21.429032] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.357 [2024-10-15 13:02:21.429034] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429038] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220a80) on tqpair=0x11c0760 00:22:01.357 [2024-10-15 13:02:21.429043] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:01.357 [2024-10-15 13:02:21.429048] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:01.357 [2024-10-15 13:02:21.429055] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:01.357 [2024-10-15 13:02:21.429061] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:01.357 [2024-10-15 13:02:21.429067] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429070] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429073] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c0760) 00:22:01.357 [2024-10-15 13:02:21.429078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.357 [2024-10-15 13:02:21.429088] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220a80, cid 4, qid 0 00:22:01.357 [2024-10-15 13:02:21.429152] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.357 [2024-10-15 13:02:21.429158] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.357 [2024-10-15 13:02:21.429161] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429164] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220a80) on tqpair=0x11c0760 00:22:01.357 [2024-10-15 13:02:21.429214] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:01.357 [2024-10-15 13:02:21.429223] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:01.357 [2024-10-15 13:02:21.429229] 
nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429232] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c0760) 00:22:01.357 [2024-10-15 13:02:21.429238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.357 [2024-10-15 13:02:21.429248] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220a80, cid 4, qid 0 00:22:01.357 [2024-10-15 13:02:21.429317] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.357 [2024-10-15 13:02:21.429323] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.357 [2024-10-15 13:02:21.429326] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429329] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c0760): datao=0, datal=4096, cccid=4 00:22:01.357 [2024-10-15 13:02:21.429332] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1220a80) on tqpair(0x11c0760): expected_datao=0, payload_size=4096 00:22:01.357 [2024-10-15 13:02:21.429336] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429351] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429355] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429394] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.357 [2024-10-15 13:02:21.429400] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.357 [2024-10-15 13:02:21.429403] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429406] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220a80) on tqpair=0x11c0760 00:22:01.357 [2024-10-15 
13:02:21.429414] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:01.357 [2024-10-15 13:02:21.429423] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:01.357 [2024-10-15 13:02:21.429433] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:01.357 [2024-10-15 13:02:21.429439] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429442] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c0760) 00:22:01.357 [2024-10-15 13:02:21.429447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.357 [2024-10-15 13:02:21.429458] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220a80, cid 4, qid 0 00:22:01.357 [2024-10-15 13:02:21.429542] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.357 [2024-10-15 13:02:21.429548] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.357 [2024-10-15 13:02:21.429551] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429554] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c0760): datao=0, datal=4096, cccid=4 00:22:01.357 [2024-10-15 13:02:21.429558] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1220a80) on tqpair(0x11c0760): expected_datao=0, payload_size=4096 00:22:01.357 [2024-10-15 13:02:21.429562] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429571] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.429575] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.472611] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.357 [2024-10-15 13:02:21.472622] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.357 [2024-10-15 13:02:21.472625] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.472628] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220a80) on tqpair=0x11c0760 00:22:01.357 [2024-10-15 13:02:21.472641] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:01.357 [2024-10-15 13:02:21.472651] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:01.357 [2024-10-15 13:02:21.472658] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.472662] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c0760) 00:22:01.357 [2024-10-15 13:02:21.472669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.357 [2024-10-15 13:02:21.472681] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220a80, cid 4, qid 0 00:22:01.357 [2024-10-15 13:02:21.472755] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.357 [2024-10-15 13:02:21.472761] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.357 [2024-10-15 13:02:21.472764] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.357 [2024-10-15 13:02:21.472767] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c0760): datao=0, datal=4096, cccid=4 00:22:01.358 [2024-10-15 13:02:21.472771] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1220a80) on tqpair(0x11c0760): expected_datao=0, payload_size=4096 00:22:01.358 [2024-10-15 13:02:21.472774] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.472785] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.472788] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.513693] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.358 [2024-10-15 13:02:21.513703] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.358 [2024-10-15 13:02:21.513707] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.513710] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220a80) on tqpair=0x11c0760 00:22:01.358 [2024-10-15 13:02:21.513719] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:01.358 [2024-10-15 13:02:21.513727] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:01.358 [2024-10-15 13:02:21.513735] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:01.358 [2024-10-15 13:02:21.513743] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:01.358 [2024-10-15 13:02:21.513748] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:01.358 [2024-10-15 13:02:21.513752] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 
00:22:01.358 [2024-10-15 13:02:21.513757] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:01.358 [2024-10-15 13:02:21.513761] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:01.358 [2024-10-15 13:02:21.513766] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:01.358 [2024-10-15 13:02:21.513779] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.513783] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c0760) 00:22:01.358 [2024-10-15 13:02:21.513790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.358 [2024-10-15 13:02:21.513796] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.513800] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.513803] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11c0760) 00:22:01.358 [2024-10-15 13:02:21.513808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.358 [2024-10-15 13:02:21.513820] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220a80, cid 4, qid 0 00:22:01.358 [2024-10-15 13:02:21.513825] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220c00, cid 5, qid 0 00:22:01.358 [2024-10-15 13:02:21.513898] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.358 [2024-10-15 13:02:21.513904] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.358 [2024-10-15 13:02:21.513907] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.513911] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220a80) on tqpair=0x11c0760 00:22:01.358 [2024-10-15 13:02:21.513917] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.358 [2024-10-15 13:02:21.513922] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.358 [2024-10-15 13:02:21.513925] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.513928] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220c00) on tqpair=0x11c0760 00:22:01.358 [2024-10-15 13:02:21.513937] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.513941] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11c0760) 00:22:01.358 [2024-10-15 13:02:21.513946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.358 [2024-10-15 13:02:21.513956] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220c00, cid 5, qid 0 00:22:01.358 [2024-10-15 13:02:21.514024] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.358 [2024-10-15 13:02:21.514030] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.358 [2024-10-15 13:02:21.514033] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514036] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220c00) on tqpair=0x11c0760 00:22:01.358 [2024-10-15 13:02:21.514043] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514047] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11c0760) 00:22:01.358 [2024-10-15 13:02:21.514055] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.358 [2024-10-15 13:02:21.514064] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220c00, cid 5, qid 0 00:22:01.358 [2024-10-15 13:02:21.514122] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.358 [2024-10-15 13:02:21.514129] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.358 [2024-10-15 13:02:21.514132] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514135] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220c00) on tqpair=0x11c0760 00:22:01.358 [2024-10-15 13:02:21.514143] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514147] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11c0760) 00:22:01.358 [2024-10-15 13:02:21.514153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.358 [2024-10-15 13:02:21.514162] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220c00, cid 5, qid 0 00:22:01.358 [2024-10-15 13:02:21.514223] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.358 [2024-10-15 13:02:21.514229] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.358 [2024-10-15 13:02:21.514232] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220c00) on tqpair=0x11c0760 00:22:01.358 [2024-10-15 13:02:21.514248] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514252] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x11c0760) 00:22:01.358 [2024-10-15 13:02:21.514258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.358 [2024-10-15 13:02:21.514264] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514267] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c0760) 00:22:01.358 [2024-10-15 13:02:21.514273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.358 [2024-10-15 13:02:21.514279] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514282] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x11c0760) 00:22:01.358 [2024-10-15 13:02:21.514288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.358 [2024-10-15 13:02:21.514296] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514299] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11c0760) 00:22:01.358 [2024-10-15 13:02:21.514305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.358 [2024-10-15 13:02:21.514315] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220c00, cid 5, qid 0 00:22:01.358 [2024-10-15 13:02:21.514320] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220a80, cid 4, qid 0 00:22:01.358 [2024-10-15 13:02:21.514324] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220d80, 
cid 6, qid 0 00:22:01.358 [2024-10-15 13:02:21.514328] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220f00, cid 7, qid 0 00:22:01.358 [2024-10-15 13:02:21.514469] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.358 [2024-10-15 13:02:21.514475] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.358 [2024-10-15 13:02:21.514478] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514483] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c0760): datao=0, datal=8192, cccid=5 00:22:01.358 [2024-10-15 13:02:21.514488] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1220c00) on tqpair(0x11c0760): expected_datao=0, payload_size=8192 00:22:01.358 [2024-10-15 13:02:21.514492] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514511] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514515] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.358 [2024-10-15 13:02:21.514525] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.358 [2024-10-15 13:02:21.514528] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514531] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c0760): datao=0, datal=512, cccid=4 00:22:01.358 [2024-10-15 13:02:21.514535] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1220a80) on tqpair(0x11c0760): expected_datao=0, payload_size=512 00:22:01.358 [2024-10-15 13:02:21.514539] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.358 [2024-10-15 13:02:21.514545] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.359 [2024-10-15 
13:02:21.514548] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514553] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.359 [2024-10-15 13:02:21.514558] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.359 [2024-10-15 13:02:21.514561] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514564] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c0760): datao=0, datal=512, cccid=6 00:22:01.359 [2024-10-15 13:02:21.514568] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1220d80) on tqpair(0x11c0760): expected_datao=0, payload_size=512 00:22:01.359 [2024-10-15 13:02:21.514572] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514577] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514580] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514585] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.359 [2024-10-15 13:02:21.514590] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.359 [2024-10-15 13:02:21.514593] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514596] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c0760): datao=0, datal=4096, cccid=7 00:22:01.359 [2024-10-15 13:02:21.514605] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1220f00) on tqpair(0x11c0760): expected_datao=0, payload_size=4096 00:22:01.359 [2024-10-15 13:02:21.514609] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514616] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514619] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514627] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.359 [2024-10-15 13:02:21.514632] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.359 [2024-10-15 13:02:21.514635] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514639] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220c00) on tqpair=0x11c0760 00:22:01.359 [2024-10-15 13:02:21.514649] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.359 [2024-10-15 13:02:21.514654] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.359 [2024-10-15 13:02:21.514657] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514660] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220a80) on tqpair=0x11c0760 00:22:01.359 [2024-10-15 13:02:21.514669] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.359 [2024-10-15 13:02:21.514676] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.359 [2024-10-15 13:02:21.514679] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514682] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220d80) on tqpair=0x11c0760 00:22:01.359 [2024-10-15 13:02:21.514688] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.359 [2024-10-15 13:02:21.514694] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.359 [2024-10-15 13:02:21.514697] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.359 [2024-10-15 13:02:21.514700] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220f00) on tqpair=0x11c0760 00:22:01.359 ===================================================== 00:22:01.359 NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1
00:22:01.359 =====================================================
00:22:01.359 Controller Capabilities/Features
00:22:01.359 ================================
00:22:01.359 Vendor ID: 8086
00:22:01.359 Subsystem Vendor ID: 8086
00:22:01.359 Serial Number: SPDK00000000000001
00:22:01.359 Model Number: SPDK bdev Controller
00:22:01.359 Firmware Version: 25.01
00:22:01.359 Recommended Arb Burst: 6
00:22:01.359 IEEE OUI Identifier: e4 d2 5c
00:22:01.359 Multi-path I/O
00:22:01.359 May have multiple subsystem ports: Yes
00:22:01.359 May have multiple controllers: Yes
00:22:01.359 Associated with SR-IOV VF: No
00:22:01.359 Max Data Transfer Size: 131072
00:22:01.359 Max Number of Namespaces: 32
00:22:01.359 Max Number of I/O Queues: 127
00:22:01.359 NVMe Specification Version (VS): 1.3
00:22:01.359 NVMe Specification Version (Identify): 1.3
00:22:01.359 Maximum Queue Entries: 128
00:22:01.359 Contiguous Queues Required: Yes
00:22:01.359 Arbitration Mechanisms Supported
00:22:01.359 Weighted Round Robin: Not Supported
00:22:01.359 Vendor Specific: Not Supported
00:22:01.359 Reset Timeout: 15000 ms
00:22:01.359 Doorbell Stride: 4 bytes
00:22:01.359 NVM Subsystem Reset: Not Supported
00:22:01.359 Command Sets Supported
00:22:01.359 NVM Command Set: Supported
00:22:01.359 Boot Partition: Not Supported
00:22:01.359 Memory Page Size Minimum: 4096 bytes
00:22:01.359 Memory Page Size Maximum: 4096 bytes
00:22:01.359 Persistent Memory Region: Not Supported
00:22:01.359 Optional Asynchronous Events Supported
00:22:01.359 Namespace Attribute Notices: Supported
00:22:01.359 Firmware Activation Notices: Not Supported
00:22:01.359 ANA Change Notices: Not Supported
00:22:01.359 PLE Aggregate Log Change Notices: Not Supported
00:22:01.359 LBA Status Info Alert Notices: Not Supported
00:22:01.359 EGE Aggregate Log Change Notices: Not Supported
00:22:01.359 Normal NVM Subsystem Shutdown event: Not Supported
00:22:01.359 Zone Descriptor Change Notices: Not Supported
00:22:01.359 Discovery Log Change Notices: Not Supported
00:22:01.359 Controller Attributes
00:22:01.359 128-bit Host Identifier: Supported
00:22:01.359 Non-Operational Permissive Mode: Not Supported
00:22:01.359 NVM Sets: Not Supported
00:22:01.359 Read Recovery Levels: Not Supported
00:22:01.359 Endurance Groups: Not Supported
00:22:01.359 Predictable Latency Mode: Not Supported
00:22:01.359 Traffic Based Keep ALive: Not Supported
00:22:01.359 Namespace Granularity: Not Supported
00:22:01.359 SQ Associations: Not Supported
00:22:01.359 UUID List: Not Supported
00:22:01.359 Multi-Domain Subsystem: Not Supported
00:22:01.359 Fixed Capacity Management: Not Supported
00:22:01.359 Variable Capacity Management: Not Supported
00:22:01.359 Delete Endurance Group: Not Supported
00:22:01.359 Delete NVM Set: Not Supported
00:22:01.359 Extended LBA Formats Supported: Not Supported
00:22:01.359 Flexible Data Placement Supported: Not Supported
00:22:01.359
00:22:01.359 Controller Memory Buffer Support
00:22:01.359 ================================
00:22:01.359 Supported: No
00:22:01.359
00:22:01.359 Persistent Memory Region Support
00:22:01.359 ================================
00:22:01.359 Supported: No
00:22:01.359
00:22:01.359 Admin Command Set Attributes
00:22:01.359 ============================
00:22:01.359 Security Send/Receive: Not Supported
00:22:01.359 Format NVM: Not Supported
00:22:01.359 Firmware Activate/Download: Not Supported
00:22:01.359 Namespace Management: Not Supported
00:22:01.359 Device Self-Test: Not Supported
00:22:01.359 Directives: Not Supported
00:22:01.359 NVMe-MI: Not Supported
00:22:01.359 Virtualization Management: Not Supported
00:22:01.359 Doorbell Buffer Config: Not Supported
00:22:01.359 Get LBA Status Capability: Not Supported
00:22:01.359 Command & Feature Lockdown Capability: Not Supported
00:22:01.359 Abort Command Limit: 4
00:22:01.359 Async Event Request Limit: 4
00:22:01.359 Number of Firmware Slots: N/A
00:22:01.359 Firmware Slot 1 Read-Only: N/A
00:22:01.359 Firmware Activation Without Reset: N/A
00:22:01.359 Multiple Update Detection Support: N/A
00:22:01.359 Firmware Update Granularity: No Information Provided
00:22:01.359 Per-Namespace SMART Log: No
00:22:01.359 Asymmetric Namespace Access Log Page: Not Supported
00:22:01.359 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:01.359 Command Effects Log Page: Supported
00:22:01.359 Get Log Page Extended Data: Supported
00:22:01.359 Telemetry Log Pages: Not Supported
00:22:01.359 Persistent Event Log Pages: Not Supported
00:22:01.359 Supported Log Pages Log Page: May Support
00:22:01.359 Commands Supported & Effects Log Page: Not Supported
00:22:01.359 Feature Identifiers & Effects Log Page:May Support
00:22:01.359 NVMe-MI Commands & Effects Log Page: May Support
00:22:01.359 Data Area 4 for Telemetry Log: Not Supported
00:22:01.359 Error Log Page Entries Supported: 128
00:22:01.359 Keep Alive: Supported
00:22:01.359 Keep Alive Granularity: 10000 ms
00:22:01.359
00:22:01.359 NVM Command Set Attributes
00:22:01.359 ==========================
00:22:01.359 Submission Queue Entry Size
00:22:01.359 Max: 64
00:22:01.359 Min: 64
00:22:01.359 Completion Queue Entry Size
00:22:01.359 Max: 16
00:22:01.359 Min: 16
00:22:01.359 Number of Namespaces: 32
00:22:01.359 Compare Command: Supported
00:22:01.359 Write Uncorrectable Command: Not Supported
00:22:01.359 Dataset Management Command: Supported
00:22:01.359 Write Zeroes Command: Supported
00:22:01.359 Set Features Save Field: Not Supported
00:22:01.359 Reservations: Supported
00:22:01.359 Timestamp: Not Supported
00:22:01.359 Copy: Supported
00:22:01.359 Volatile Write Cache: Present
00:22:01.359 Atomic Write Unit (Normal): 1
00:22:01.359 Atomic Write Unit (PFail): 1
00:22:01.359 Atomic Compare & Write Unit: 1
00:22:01.359 Fused Compare & Write: Supported
00:22:01.359 Scatter-Gather List
00:22:01.359 SGL Command Set: Supported
00:22:01.359 SGL Keyed: Supported
00:22:01.359 SGL Bit Bucket Descriptor: Not Supported
00:22:01.359 SGL Metadata Pointer: Not Supported
00:22:01.359 Oversized SGL: Not Supported
00:22:01.359 SGL Metadata Address: Not Supported
00:22:01.359 SGL Offset: Supported
00:22:01.359 Transport SGL Data Block: Not Supported
00:22:01.359 Replay Protected Memory Block: Not Supported
00:22:01.359
00:22:01.359 Firmware Slot Information
00:22:01.359 =========================
00:22:01.360 Active slot: 1
00:22:01.360 Slot 1 Firmware Revision: 25.01
00:22:01.360
00:22:01.360
00:22:01.360 Commands Supported and Effects
00:22:01.360 ==============================
00:22:01.360 Admin Commands
00:22:01.360 --------------
00:22:01.360 Get Log Page (02h): Supported
00:22:01.360 Identify (06h): Supported
00:22:01.360 Abort (08h): Supported
00:22:01.360 Set Features (09h): Supported
00:22:01.360 Get Features (0Ah): Supported
00:22:01.360 Asynchronous Event Request (0Ch): Supported
00:22:01.360 Keep Alive (18h): Supported
00:22:01.360 I/O Commands
00:22:01.360 ------------
00:22:01.360 Flush (00h): Supported LBA-Change
00:22:01.360 Write (01h): Supported LBA-Change
00:22:01.360 Read (02h): Supported
00:22:01.360 Compare (05h): Supported
00:22:01.360 Write Zeroes (08h): Supported LBA-Change
00:22:01.360 Dataset Management (09h): Supported LBA-Change
00:22:01.360 Copy (19h): Supported LBA-Change
00:22:01.360
00:22:01.360 Error Log
00:22:01.360 =========
00:22:01.360
00:22:01.360 Arbitration
00:22:01.360 ===========
00:22:01.360 Arbitration Burst: 1
00:22:01.360
00:22:01.360 Power Management
00:22:01.360 ================
00:22:01.360 Number of Power States: 1
00:22:01.360 Current Power State: Power State #0
00:22:01.360 Power State #0:
00:22:01.360 Max Power: 0.00 W
00:22:01.360 Non-Operational State: Operational
00:22:01.360 Entry Latency: Not Reported
00:22:01.360 Exit Latency: Not Reported
00:22:01.360 Relative Read Throughput: 0
00:22:01.360 Relative Read Latency: 0
00:22:01.360 Relative Write Throughput: 0
00:22:01.360 Relative Write Latency: 0
00:22:01.360 Idle Power: Not Reported 00:22:01.360 Active Power: Not Reported 00:22:01.360 Non-Operational Permissive Mode: Not Supported 00:22:01.360 00:22:01.360 Health Information 00:22:01.360 ================== 00:22:01.360 Critical Warnings: 00:22:01.360 Available Spare Space: OK 00:22:01.360 Temperature: OK 00:22:01.360 Device Reliability: OK 00:22:01.360 Read Only: No 00:22:01.360 Volatile Memory Backup: OK 00:22:01.360 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:01.360 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:01.360 Available Spare: 0% 00:22:01.360 Available Spare Threshold: 0% 00:22:01.360 Life Percentage Used:[2024-10-15 13:02:21.514782] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.360 [2024-10-15 13:02:21.514787] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11c0760) 00:22:01.360 [2024-10-15 13:02:21.514793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.360 [2024-10-15 13:02:21.514804] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220f00, cid 7, qid 0 00:22:01.360 [2024-10-15 13:02:21.514888] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.360 [2024-10-15 13:02:21.514894] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.360 [2024-10-15 13:02:21.514897] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.360 [2024-10-15 13:02:21.514901] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220f00) on tqpair=0x11c0760 00:22:01.360 [2024-10-15 13:02:21.514927] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:01.360 [2024-10-15 13:02:21.514936] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220480) on tqpair=0x11c0760 00:22:01.360 [2024-10-15 13:02:21.514942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.360 [2024-10-15 13:02:21.514947] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220600) on tqpair=0x11c0760 00:22:01.360 [2024-10-15 13:02:21.514951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.360 [2024-10-15 13:02:21.514956] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220780) on tqpair=0x11c0760 00:22:01.360 [2024-10-15 13:02:21.514960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.360 [2024-10-15 13:02:21.514964] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.360 [2024-10-15 13:02:21.514968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.360 [2024-10-15 13:02:21.514975] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.360 [2024-10-15 13:02:21.514979] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.360 [2024-10-15 13:02:21.514982] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.360 [2024-10-15 13:02:21.514988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.360 [2024-10-15 13:02:21.514999] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.360 [2024-10-15 13:02:21.515064] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.360 [2024-10-15 13:02:21.515070] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.360 [2024-10-15 13:02:21.515073] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.360 [2024-10-15 13:02:21.515076] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.360 [2024-10-15 13:02:21.515082] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.360 [2024-10-15 13:02:21.515086] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.360 [2024-10-15 13:02:21.515091] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.360 [2024-10-15 13:02:21.515097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.360 [2024-10-15 13:02:21.515108] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.360 [2024-10-15 13:02:21.515185] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.360 [2024-10-15 13:02:21.515191] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.360 [2024-10-15 13:02:21.515194] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.360 [2024-10-15 13:02:21.515197] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.360 [2024-10-15 13:02:21.515201] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:01.360 [2024-10-15 13:02:21.515205] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:01.360 [2024-10-15 13:02:21.515214] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.360 [2024-10-15 13:02:21.515217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.360 [2024-10-15 13:02:21.515220] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.360 [2024-10-15 
13:02:21.515226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.360 [2024-10-15 13:02:21.515236] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.360 [2024-10-15 13:02:21.515302] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.360 [2024-10-15 13:02:21.515308] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.360 [2024-10-15 13:02:21.515311] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515315] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.515323] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515327] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515330] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.361 [2024-10-15 13:02:21.515336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.361 [2024-10-15 13:02:21.515346] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.361 [2024-10-15 13:02:21.515420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.361 [2024-10-15 13:02:21.515426] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.361 [2024-10-15 13:02:21.515429] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515432] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.515440] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 
13:02:21.515444] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515447] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.361 [2024-10-15 13:02:21.515452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.361 [2024-10-15 13:02:21.515462] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.361 [2024-10-15 13:02:21.515524] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.361 [2024-10-15 13:02:21.515530] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.361 [2024-10-15 13:02:21.515533] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515537] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.515548] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515552] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515555] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.361 [2024-10-15 13:02:21.515561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.361 [2024-10-15 13:02:21.515570] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.361 [2024-10-15 13:02:21.515636] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.361 [2024-10-15 13:02:21.515642] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.361 [2024-10-15 13:02:21.515645] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 
13:02:21.515649] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.515657] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515661] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515664] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.361 [2024-10-15 13:02:21.515669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.361 [2024-10-15 13:02:21.515679] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.361 [2024-10-15 13:02:21.515738] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.361 [2024-10-15 13:02:21.515744] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.361 [2024-10-15 13:02:21.515747] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515750] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.515758] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515762] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515765] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.361 [2024-10-15 13:02:21.515770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.361 [2024-10-15 13:02:21.515780] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.361 [2024-10-15 13:02:21.515855] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.361 
[2024-10-15 13:02:21.515860] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.361 [2024-10-15 13:02:21.515864] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515867] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.515875] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515879] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515882] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.361 [2024-10-15 13:02:21.515887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.361 [2024-10-15 13:02:21.515897] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.361 [2024-10-15 13:02:21.515956] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.361 [2024-10-15 13:02:21.515962] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.361 [2024-10-15 13:02:21.515965] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515968] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.515977] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515981] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.515986] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.361 [2024-10-15 13:02:21.515991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.361 
[2024-10-15 13:02:21.516001] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.361 [2024-10-15 13:02:21.516059] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.361 [2024-10-15 13:02:21.516065] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.361 [2024-10-15 13:02:21.516069] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516072] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.516080] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516084] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516087] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.361 [2024-10-15 13:02:21.516093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.361 [2024-10-15 13:02:21.516102] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.361 [2024-10-15 13:02:21.516178] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.361 [2024-10-15 13:02:21.516184] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.361 [2024-10-15 13:02:21.516188] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516191] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.516199] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516203] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516206] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.361 [2024-10-15 13:02:21.516212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.361 [2024-10-15 13:02:21.516221] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.361 [2024-10-15 13:02:21.516294] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.361 [2024-10-15 13:02:21.516300] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.361 [2024-10-15 13:02:21.516303] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516306] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.516314] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516318] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516321] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.361 [2024-10-15 13:02:21.516327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.361 [2024-10-15 13:02:21.516336] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.361 [2024-10-15 13:02:21.516406] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.361 [2024-10-15 13:02:21.516412] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.361 [2024-10-15 13:02:21.516415] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516418] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.516428] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516432] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516435] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.361 [2024-10-15 13:02:21.516442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.361 [2024-10-15 13:02:21.516452] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.361 [2024-10-15 13:02:21.516528] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.361 [2024-10-15 13:02:21.516534] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.361 [2024-10-15 13:02:21.516537] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516540] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.516548] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516552] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.516555] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.361 [2024-10-15 13:02:21.516561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.361 [2024-10-15 13:02:21.516570] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.361 [2024-10-15 13:02:21.520610] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.361 [2024-10-15 13:02:21.520618] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.361 [2024-10-15 13:02:21.520621] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.520625] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.361 [2024-10-15 13:02:21.520634] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.361 [2024-10-15 13:02:21.520639] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.362 [2024-10-15 13:02:21.520642] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c0760) 00:22:01.362 [2024-10-15 13:02:21.520648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.362 [2024-10-15 13:02:21.520659] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1220900, cid 3, qid 0 00:22:01.362 [2024-10-15 13:02:21.520733] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.362 [2024-10-15 13:02:21.520738] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.362 [2024-10-15 13:02:21.520742] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.362 [2024-10-15 13:02:21.520745] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1220900) on tqpair=0x11c0760 00:22:01.362 [2024-10-15 13:02:21.520751] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:22:01.362 0% 00:22:01.362 Data Units Read: 0 00:22:01.362 Data Units Written: 0 00:22:01.362 Host Read Commands: 0 00:22:01.362 Host Write Commands: 0 00:22:01.362 Controller Busy Time: 0 minutes 00:22:01.362 Power Cycles: 0 00:22:01.362 Power On Hours: 0 hours 00:22:01.362 Unsafe Shutdowns: 0 00:22:01.362 Unrecoverable Media Errors: 0 00:22:01.362 Lifetime Error Log Entries: 0 00:22:01.362 Warning Temperature Time: 0 minutes 00:22:01.362 Critical Temperature Time: 0 minutes 00:22:01.362 00:22:01.362 Number 
of Queues 00:22:01.362 ================ 00:22:01.362 Number of I/O Submission Queues: 127 00:22:01.362 Number of I/O Completion Queues: 127 00:22:01.362 00:22:01.362 Active Namespaces 00:22:01.362 ================= 00:22:01.362 Namespace ID:1 00:22:01.362 Error Recovery Timeout: Unlimited 00:22:01.362 Command Set Identifier: NVM (00h) 00:22:01.362 Deallocate: Supported 00:22:01.362 Deallocated/Unwritten Error: Not Supported 00:22:01.362 Deallocated Read Value: Unknown 00:22:01.362 Deallocate in Write Zeroes: Not Supported 00:22:01.362 Deallocated Guard Field: 0xFFFF 00:22:01.362 Flush: Supported 00:22:01.362 Reservation: Supported 00:22:01.362 Namespace Sharing Capabilities: Multiple Controllers 00:22:01.362 Size (in LBAs): 131072 (0GiB) 00:22:01.362 Capacity (in LBAs): 131072 (0GiB) 00:22:01.362 Utilization (in LBAs): 131072 (0GiB) 00:22:01.362 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:01.362 EUI64: ABCDEF0123456789 00:22:01.362 UUID: f211670a-54a8-4adf-800b-ca4217449a17 00:22:01.362 Thin Provisioning: Not Supported 00:22:01.362 Per-NS Atomic Units: Yes 00:22:01.362 Atomic Boundary Size (Normal): 0 00:22:01.362 Atomic Boundary Size (PFail): 0 00:22:01.362 Atomic Boundary Offset: 0 00:22:01.362 Maximum Single Source Range Length: 65535 00:22:01.362 Maximum Copy Length: 65535 00:22:01.362 Maximum Source Range Count: 1 00:22:01.362 NGUID/EUI64 Never Reused: No 00:22:01.362 Namespace Write Protected: No 00:22:01.362 Number of LBA Formats: 1 00:22:01.362 Current LBA Format: LBA Format #00 00:22:01.362 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:01.362 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:01.362 rmmod nvme_tcp 00:22:01.362 rmmod nvme_fabrics 00:22:01.362 rmmod nvme_keyring 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1293988 ']' 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1293988 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1293988 ']' 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1293988 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.362 13:02:21 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1293988 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1293988' 00:22:01.362 killing process with pid 1293988 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1293988 00:22:01.362 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1293988 00:22:01.622 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:01.622 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:01.622 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:01.622 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:01.622 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:22:01.622 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:01.622 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:22:01.622 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:01.622 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:01.622 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.622 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.622 13:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:04.158 13:02:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:04.158 00:22:04.158 real 0m9.327s 00:22:04.158 user 0m5.450s 00:22:04.158 sys 0m4.880s 00:22:04.158 13:02:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:04.158 13:02:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:04.158 ************************************ 00:22:04.158 END TEST nvmf_identify 00:22:04.158 ************************************ 00:22:04.158 13:02:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:04.158 13:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:04.158 13:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:04.158 13:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.158 ************************************ 00:22:04.158 START TEST nvmf_perf 00:22:04.158 ************************************ 00:22:04.158 13:02:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:04.158 * Looking for test storage... 
00:22:04.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:04.158 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:04.158 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:22:04.158 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:04.158 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:04.158 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:04.158 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:04.158 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:04.158 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.158 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:04.158 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:04.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.159 --rc genhtml_branch_coverage=1 00:22:04.159 --rc genhtml_function_coverage=1 00:22:04.159 --rc genhtml_legend=1 00:22:04.159 --rc geninfo_all_blocks=1 00:22:04.159 --rc geninfo_unexecuted_blocks=1 00:22:04.159 00:22:04.159 ' 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:04.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:04.159 --rc genhtml_branch_coverage=1 00:22:04.159 --rc genhtml_function_coverage=1 00:22:04.159 --rc genhtml_legend=1 00:22:04.159 --rc geninfo_all_blocks=1 00:22:04.159 --rc geninfo_unexecuted_blocks=1 00:22:04.159 00:22:04.159 ' 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:04.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.159 --rc genhtml_branch_coverage=1 00:22:04.159 --rc genhtml_function_coverage=1 00:22:04.159 --rc genhtml_legend=1 00:22:04.159 --rc geninfo_all_blocks=1 00:22:04.159 --rc geninfo_unexecuted_blocks=1 00:22:04.159 00:22:04.159 ' 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:04.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.159 --rc genhtml_branch_coverage=1 00:22:04.159 --rc genhtml_function_coverage=1 00:22:04.159 --rc genhtml_legend=1 00:22:04.159 --rc geninfo_all_blocks=1 00:22:04.159 --rc geninfo_unexecuted_blocks=1 00:22:04.159 00:22:04.159 ' 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:04.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:04.159 13:02:24 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:04.159 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:04.160 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:04.160 13:02:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.730 13:02:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.730 
13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:10.730 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:10.730 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:10.730 Found net devices under 0000:86:00.0: cvl_0_0 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:10.730 13:02:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:10.730 Found net devices under 0000:86:00.1: cvl_0_1 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.730 13:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.730 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.730 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.730 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:10.730 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.730 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.730 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:10.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:22:10.731 00:22:10.731 --- 10.0.0.2 ping statistics --- 00:22:10.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.731 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:10.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:22:10.731 00:22:10.731 --- 10.0.0.1 ping statistics --- 00:22:10.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.731 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter 
start_nvmf_tgt 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1297727 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1297727 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1297727 ']' 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:10.731 [2024-10-15 13:02:30.224689] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:22:10.731 [2024-10-15 13:02:30.224735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.731 [2024-10-15 13:02:30.295666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:10.731 [2024-10-15 13:02:30.337863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.731 [2024-10-15 13:02:30.337900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.731 [2024-10-15 13:02:30.337907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.731 [2024-10-15 13:02:30.337913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.731 [2024-10-15 13:02:30.337919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.731 [2024-10-15 13:02:30.339473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.731 [2024-10-15 13:02:30.339583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.731 [2024-10-15 13:02:30.339693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.731 [2024-10-15 13:02:30.339694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:10.731 13:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:13.264 13:02:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:13.264 13:02:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:13.523 13:02:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:13.523 13:02:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:13.781 13:02:33 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:13.781 13:02:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:13.781 13:02:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:13.781 13:02:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:13.781 13:02:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:13.781 [2024-10-15 13:02:34.100376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.040 13:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:14.040 13:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:14.040 13:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:14.298 13:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:14.298 13:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:14.557 13:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.816 [2024-10-15 13:02:34.916777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.816 13:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:15.075 13:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:15.075 13:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:15.075 13:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:15.075 13:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:16.452 Initializing NVMe Controllers 00:22:16.452 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:16.452 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:16.452 Initialization complete. Launching workers. 00:22:16.452 ======================================================== 00:22:16.452 Latency(us) 00:22:16.452 Device Information : IOPS MiB/s Average min max 00:22:16.452 PCIE (0000:5e:00.0) NSID 1 from core 0: 97735.00 381.78 327.04 34.02 4548.02 00:22:16.452 ======================================================== 00:22:16.452 Total : 97735.00 381.78 327.04 34.02 4548.02 00:22:16.452 00:22:16.452 13:02:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:17.507 Initializing NVMe Controllers 00:22:17.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:17.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:17.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:17.507 Initialization complete. Launching workers. 
00:22:17.507 ======================================================== 00:22:17.507 Latency(us) 00:22:17.507 Device Information : IOPS MiB/s Average min max 00:22:17.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 268.00 1.05 3808.48 118.93 44816.67 00:22:17.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 67.00 0.26 15115.28 7213.47 47897.25 00:22:17.507 ======================================================== 00:22:17.507 Total : 335.00 1.31 6069.84 118.93 47897.25 00:22:17.507 00:22:17.507 13:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:18.884 Initializing NVMe Controllers 00:22:18.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:18.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:18.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:18.884 Initialization complete. Launching workers. 
00:22:18.884 ======================================================== 00:22:18.884 Latency(us) 00:22:18.884 Device Information : IOPS MiB/s Average min max 00:22:18.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11137.71 43.51 2872.80 425.92 6335.91 00:22:18.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3848.90 15.03 8364.53 7172.08 16187.48 00:22:18.884 ======================================================== 00:22:18.884 Total : 14986.61 58.54 4283.20 425.92 16187.48 00:22:18.884 00:22:18.884 13:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:18.884 13:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:18.884 13:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:21.419 Initializing NVMe Controllers 00:22:21.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.419 Controller IO queue size 128, less than required. 00:22:21.419 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:21.419 Controller IO queue size 128, less than required. 00:22:21.419 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:21.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:21.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:21.419 Initialization complete. Launching workers. 
00:22:21.419 ======================================================== 00:22:21.419 Latency(us) 00:22:21.419 Device Information : IOPS MiB/s Average min max 00:22:21.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1821.90 455.47 71297.95 48997.21 142607.50 00:22:21.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 601.47 150.37 229049.49 74333.68 371091.56 00:22:21.419 ======================================================== 00:22:21.419 Total : 2423.37 605.84 110451.05 48997.21 371091.56 00:22:21.419 00:22:21.419 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:21.419 No valid NVMe controllers or AIO or URING devices found 00:22:21.419 Initializing NVMe Controllers 00:22:21.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.419 Controller IO queue size 128, less than required. 00:22:21.419 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:21.419 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:21.419 Controller IO queue size 128, less than required. 00:22:21.419 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:21.419 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:21.419 WARNING: Some requested NVMe devices were skipped 00:22:21.678 13:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:24.213 Initializing NVMe Controllers 00:22:24.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:24.213 Controller IO queue size 128, less than required. 00:22:24.213 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:24.213 Controller IO queue size 128, less than required. 00:22:24.213 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:24.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:24.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:24.213 Initialization complete. Launching workers. 
00:22:24.213 00:22:24.213 ==================== 00:22:24.213 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:24.213 TCP transport: 00:22:24.213 polls: 11101 00:22:24.213 idle_polls: 7637 00:22:24.213 sock_completions: 3464 00:22:24.213 nvme_completions: 6407 00:22:24.213 submitted_requests: 9672 00:22:24.213 queued_requests: 1 00:22:24.213 00:22:24.213 ==================== 00:22:24.213 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:24.213 TCP transport: 00:22:24.213 polls: 11219 00:22:24.213 idle_polls: 7674 00:22:24.213 sock_completions: 3545 00:22:24.213 nvme_completions: 6539 00:22:24.213 submitted_requests: 9794 00:22:24.213 queued_requests: 1 00:22:24.213 ======================================================== 00:22:24.214 Latency(us) 00:22:24.214 Device Information : IOPS MiB/s Average min max 00:22:24.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1601.39 400.35 81938.72 46096.92 143973.71 00:22:24.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1634.38 408.60 79824.79 47426.69 134641.92 00:22:24.214 ======================================================== 00:22:24.214 Total : 3235.77 808.94 80870.98 46096.92 143973.71 00:22:24.214 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.214 rmmod nvme_tcp 00:22:24.214 rmmod nvme_fabrics 00:22:24.214 rmmod nvme_keyring 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1297727 ']' 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1297727 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1297727 ']' 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1297727 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:24.214 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1297727 00:22:24.473 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:24.473 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:24.473 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1297727' 00:22:24.473 killing process with pid 1297727 00:22:24.473 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 1297727 00:22:24.473 13:02:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1297727 00:22:26.381 13:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:26.381 13:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:26.381 13:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:26.381 13:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:26.381 13:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:22:26.381 13:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:26.381 13:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:22:26.381 13:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:26.381 13:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:26.381 13:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.381 13:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.381 13:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:28.917 00:22:28.917 real 0m24.709s 00:22:28.917 user 1m4.628s 00:22:28.917 sys 0m8.415s 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:28.917 ************************************ 00:22:28.917 END TEST nvmf_perf 00:22:28.917 ************************************ 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.917 ************************************ 00:22:28.917 START TEST nvmf_fio_host 00:22:28.917 ************************************ 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:28.917 * Looking for test storage... 00:22:28.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.917 13:02:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.917 13:02:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:28.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.917 --rc genhtml_branch_coverage=1 00:22:28.917 --rc genhtml_function_coverage=1 00:22:28.917 --rc genhtml_legend=1 00:22:28.917 --rc geninfo_all_blocks=1 00:22:28.917 --rc geninfo_unexecuted_blocks=1 00:22:28.917 00:22:28.917 ' 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:28.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.917 --rc genhtml_branch_coverage=1 00:22:28.917 --rc genhtml_function_coverage=1 00:22:28.917 --rc genhtml_legend=1 00:22:28.917 --rc geninfo_all_blocks=1 00:22:28.917 --rc geninfo_unexecuted_blocks=1 00:22:28.917 00:22:28.917 ' 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:28.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.917 --rc genhtml_branch_coverage=1 00:22:28.917 --rc genhtml_function_coverage=1 00:22:28.917 --rc genhtml_legend=1 00:22:28.917 --rc geninfo_all_blocks=1 00:22:28.917 --rc geninfo_unexecuted_blocks=1 00:22:28.917 00:22:28.917 ' 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:28.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.917 --rc genhtml_branch_coverage=1 00:22:28.917 --rc genhtml_function_coverage=1 00:22:28.917 --rc genhtml_legend=1 00:22:28.917 --rc geninfo_all_blocks=1 00:22:28.917 --rc geninfo_unexecuted_blocks=1 00:22:28.917 00:22:28.917 ' 00:22:28.917 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:28.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:28.918 13:02:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:28.918 13:02:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:35.488 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:22:35.489 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:35.489 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.489 13:02:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:35.489 Found net devices under 0000:86:00.0: cvl_0_0 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:35.489 Found net devices under 0000:86:00.1: cvl_0_1 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 
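The discovery trace above ends with nvmf/common.sh globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the directory prefix (`${pci_net_devs[@]##*/}`) to turn each PCI address into kernel netdev names such as cvl_0_0. A minimal standalone sketch of that step — the helper name and the directory argument are illustrative, not SPDK's:

```shell
# Sketch of the netdev-discovery step traced above (assumption: the
# directory passed in has the same layout as /sys/bus/pci/devices/<pci>,
# i.e. a net/ subdirectory with one entry per bound kernel netdev).
net_devs_under() {
    local pci_dir=$1
    local -a devs
    devs=("$pci_dir"/net/*)         # glob: one path per netdev entry
    devs=("${devs[@]##*/}")         # strip the path, keep just the names
    printf '%s\n' "${devs[@]}"
}
```

Run against a real device directory (e.g. `net_devs_under /sys/bus/pci/devices/0000:86:00.0`) this prints the interface names the log reports as "Found net devices under 0000:86:00.0".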
00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.489 13:02:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:35.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:22:35.489 00:22:35.489 --- 10.0.0.2 ping statistics --- 00:22:35.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.489 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:35.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:22:35.489 00:22:35.489 --- 10.0.0.1 ping statistics --- 00:22:35.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.489 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1303858 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1303858 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1303858 ']' 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:35.489 13:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.489 [2024-10-15 13:02:54.939650] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:22:35.489 [2024-10-15 13:02:54.939691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.489 [2024-10-15 13:02:55.014218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.489 [2024-10-15 13:02:55.056641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.489 [2024-10-15 13:02:55.056674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
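`waitforlisten` above blocks until the freshly started nvmf_tgt (pid 1303858) opens its RPC socket at /var/tmp/spdk.sock, retrying up to `max_retries=100` times. A hedged sketch of that polling pattern — the poll interval and the bare `-S` socket test are illustrative; SPDK's real helper additionally probes the RPC endpoint itself:

```shell
# Simplified polling loop behind "Waiting for process to start up and
# listen on UNIX domain socket /var/tmp/spdk.sock..." (assumption: a
# socket file appearing is treated as "listening", which the real
# waitforlisten verifies more thoroughly).
waitforsocket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # socket exists: app is up
        sleep 0.1
    done
    return 1                          # gave up: caller should abort
}
```

The surrounding `trap ... SIGINT SIGTERM EXIT` in the log ensures `nvmftestfini` tears the target down even if this wait fails.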
00:22:35.489 [2024-10-15 13:02:55.056681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.489 [2024-10-15 13:02:55.056687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.489 [2024-10-15 13:02:55.056692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.489 [2024-10-15 13:02:55.058248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.489 [2024-10-15 13:02:55.058274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.489 [2024-10-15 13:02:55.058290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.489 [2024-10-15 13:02:55.058294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.489 13:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.489 13:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:22:35.489 13:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:35.489 [2024-10-15 13:02:55.315137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.489 13:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:35.489 13:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.489 13:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.489 13:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:35.489 Malloc1 00:22:35.489 13:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:35.748 13:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:35.748 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.007 [2024-10-15 13:02:56.187893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.007 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:36.266 13:02:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:36.266 13:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:36.524 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:36.524 fio-3.35 00:22:36.524 Starting 1 thread 00:22:39.053 00:22:39.053 test: (groupid=0, jobs=1): err= 0: pid=1304238: Tue Oct 15 13:02:59 2024 00:22:39.053 read: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2005msec) 00:22:39.053 slat (nsec): min=1539, max=241975, avg=1744.12, stdev=2212.16 00:22:39.053 clat (usec): min=3105, max=10206, avg=5975.52, stdev=448.49 00:22:39.053 lat (usec): min=3144, max=10207, avg=5977.26, stdev=448.43 00:22:39.053 clat percentiles (usec): 00:22:39.053 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604], 00:22:39.053 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6128], 00:22:39.053 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:22:39.053 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 7701], 99.95th=[ 8848], 00:22:39.053 | 99.99th=[ 9634] 00:22:39.053 bw ( KiB/s): min=46512, max=47912, per=99.98%, avg=47332.00, stdev=647.57, samples=4 00:22:39.053 iops : min=11628, max=11978, avg=11833.00, stdev=161.89, samples=4 00:22:39.053 write: IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(92.2MiB/2005msec); 0 zone resets 00:22:39.053 slat (nsec): min=1567, max=225620, avg=1809.04, stdev=1663.38 00:22:39.053 clat (usec): min=2429, max=9494, avg=4832.16, stdev=387.20 00:22:39.053 lat (usec): min=2444, max=9496, avg=4833.97, stdev=387.26 00:22:39.053 clat percentiles (usec): 00:22:39.053 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:22:39.053 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 
00:22:39.053 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:22:39.053 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 8094], 99.95th=[ 8848], 00:22:39.053 | 99.99th=[ 9503] 00:22:39.053 bw ( KiB/s): min=46528, max=47488, per=99.98%, avg=47104.00, stdev=414.77, samples=4 00:22:39.053 iops : min=11632, max=11872, avg=11776.00, stdev=103.69, samples=4 00:22:39.053 lat (msec) : 4=0.68%, 10=99.32%, 20=0.01% 00:22:39.053 cpu : usr=76.50%, sys=22.55%, ctx=81, majf=0, minf=3 00:22:39.053 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:39.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:39.053 issued rwts: total=23730,23615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:39.053 00:22:39.053 Run status group 0 (all jobs): 00:22:39.053 READ: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2005-2005msec 00:22:39.053 WRITE: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=92.2MiB (96.7MB), run=2005-2005msec 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' 
]] 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:39.053 13:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:39.311 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:39.311 fio-3.35 00:22:39.311 Starting 1 thread 00:22:41.842 00:22:41.842 test: (groupid=0, jobs=1): err= 0: pid=1304806: Tue Oct 15 13:03:01 2024 00:22:41.842 read: IOPS=10.9k, BW=170MiB/s (179MB/s)(342MiB/2006msec) 00:22:41.842 slat (nsec): min=2522, max=98299, avg=2891.01, stdev=1347.85 00:22:41.842 clat (usec): min=1345, max=12529, avg=6764.70, stdev=1624.75 00:22:41.842 lat (usec): min=1348, max=12532, avg=6767.59, stdev=1624.87 00:22:41.842 clat percentiles (usec): 00:22:41.842 | 1.00th=[ 3621], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5342], 00:22:41.842 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7177], 00:22:41.842 | 70.00th=[ 7635], 80.00th=[ 8094], 90.00th=[ 8848], 95.00th=[ 9503], 00:22:41.842 | 99.00th=[10945], 99.50th=[11207], 99.90th=[11863], 99.95th=[11994], 00:22:41.842 | 99.99th=[11994] 00:22:41.842 bw ( KiB/s): min=82016, max=96832, per=50.32%, avg=87816.00, stdev=6564.88, samples=4 00:22:41.842 iops : min= 5126, max= 6052, avg=5488.50, stdev=410.31, samples=4 00:22:41.842 write: IOPS=6406, BW=100MiB/s (105MB/s)(180MiB/1797msec); 0 zone resets 00:22:41.842 slat (usec): min=29, max=379, avg=32.35, stdev= 6.93 00:22:41.842 clat (usec): min=4055, max=13509, avg=8652.69, stdev=1444.52 00:22:41.842 lat (usec): min=4089, max=13619, avg=8685.04, stdev=1445.70 00:22:41.842 clat percentiles (usec): 00:22:41.842 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 6915], 
20.00th=[ 7439], 00:22:41.842 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:22:41.842 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11338], 00:22:41.842 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13173], 99.95th=[13304], 00:22:41.842 | 99.99th=[13435] 00:22:41.842 bw ( KiB/s): min=86592, max=100800, per=89.43%, avg=91664.00, stdev=6469.89, samples=4 00:22:41.842 iops : min= 5412, max= 6300, avg=5729.00, stdev=404.37, samples=4 00:22:41.842 lat (msec) : 2=0.01%, 4=1.64%, 10=89.95%, 20=8.40% 00:22:41.842 cpu : usr=86.94%, sys=12.36%, ctx=46, majf=0, minf=4 00:22:41.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:41.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:41.842 issued rwts: total=21882,11512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.842 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:41.842 00:22:41.842 Run status group 0 (all jobs): 00:22:41.842 READ: bw=170MiB/s (179MB/s), 170MiB/s-170MiB/s (179MB/s-179MB/s), io=342MiB (359MB), run=2006-2006msec 00:22:41.842 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=180MiB (189MB), run=1797-1797msec 00:22:41.842 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 
00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.843 rmmod nvme_tcp 00:22:41.843 rmmod nvme_fabrics 00:22:41.843 rmmod nvme_keyring 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1303858 ']' 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1303858 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1303858 ']' 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1303858 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:41.843 13:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1303858 00:22:41.843 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:41.843 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:41.843 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1303858' 
00:22:41.843 killing process with pid 1303858 00:22:41.843 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1303858 00:22:41.843 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1303858 00:22:42.102 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:42.102 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:42.102 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:42.102 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:42.102 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:22:42.102 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:42.102 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:22:42.102 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:42.102 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:42.102 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.102 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.102 13:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.007 13:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:44.007 00:22:44.007 real 0m15.500s 00:22:44.007 user 0m46.065s 00:22:44.007 sys 0m6.320s 00:22:44.007 13:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:44.007 13:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.007 ************************************ 
00:22:44.007 END TEST nvmf_fio_host 00:22:44.007 ************************************ 00:22:44.007 13:03:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:44.007 13:03:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:44.007 13:03:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:44.007 13:03:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.268 ************************************ 00:22:44.268 START TEST nvmf_failover 00:22:44.268 ************************************ 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:44.268 * Looking for test storage... 00:22:44.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.268 13:03:04 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:44.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.268 --rc genhtml_branch_coverage=1 00:22:44.268 --rc genhtml_function_coverage=1 00:22:44.268 --rc genhtml_legend=1 00:22:44.268 --rc geninfo_all_blocks=1 00:22:44.268 --rc geninfo_unexecuted_blocks=1 00:22:44.268 00:22:44.268 ' 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:44.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.268 --rc genhtml_branch_coverage=1 00:22:44.268 --rc genhtml_function_coverage=1 00:22:44.268 --rc genhtml_legend=1 00:22:44.268 --rc geninfo_all_blocks=1 00:22:44.268 --rc geninfo_unexecuted_blocks=1 00:22:44.268 00:22:44.268 ' 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:44.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.268 --rc genhtml_branch_coverage=1 00:22:44.268 --rc genhtml_function_coverage=1 00:22:44.268 --rc genhtml_legend=1 00:22:44.268 --rc geninfo_all_blocks=1 00:22:44.268 --rc geninfo_unexecuted_blocks=1 00:22:44.268 00:22:44.268 ' 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:44.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.268 --rc genhtml_branch_coverage=1 00:22:44.268 --rc genhtml_function_coverage=1 00:22:44.268 --rc genhtml_legend=1 00:22:44.268 --rc 
geninfo_all_blocks=1 00:22:44.268 --rc geninfo_unexecuted_blocks=1 00:22:44.268 00:22:44.268 ' 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.268 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:44.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:44.269 13:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.998 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.998 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.998 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.998 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:22:50.998 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.998 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.998 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.998 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.999 13:03:10 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:50.999 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:50.999 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:50.999 Found net devices under 0000:86:00.0: cvl_0_0 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:50.999 Found net devices under 0000:86:00.1: cvl_0_1 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:22:50.999 00:22:50.999 --- 10.0.0.2 ping statistics --- 00:22:50.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.999 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:50.999 00:22:50.999 --- 10.0.0.1 ping statistics --- 00:22:50.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.999 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1309297 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 1309297 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1309297 ']' 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.999 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.000 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:51.000 [2024-10-15 13:03:10.566281] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:22:51.000 [2024-10-15 13:03:10.566325] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.000 [2024-10-15 13:03:10.636347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:51.000 [2024-10-15 13:03:10.677884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.000 [2024-10-15 13:03:10.677920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.000 [2024-10-15 13:03:10.677927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.000 [2024-10-15 13:03:10.677934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:51.000 [2024-10-15 13:03:10.677940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.000 [2024-10-15 13:03:10.679318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.000 [2024-10-15 13:03:10.679424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.000 [2024-10-15 13:03:10.679425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.000 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:51.000 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:51.000 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:51.000 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:51.000 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:51.000 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.000 13:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:51.000 [2024-10-15 13:03:10.978856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.000 13:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:51.000 Malloc0 00:22:51.000 13:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:51.258 13:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:51.517 13:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.517 [2024-10-15 13:03:11.802097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.517 13:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:51.776 [2024-10-15 13:03:11.994594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:51.776 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:52.034 [2024-10-15 13:03:12.235401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:52.034 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1309562 00:22:52.034 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:52.034 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.034 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1309562 /var/tmp/bdevperf.sock 00:22:52.034 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 
-- # '[' -z 1309562 ']' 00:22:52.034 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.034 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:52.034 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.034 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:52.034 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:52.293 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:52.293 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:52.293 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:52.552 NVMe0n1 00:22:52.552 13:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:53.118 00:22:53.118 13:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1309788 00:22:53.118 13:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.118 13:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:22:54.052 13:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.311 [2024-10-15 13:03:14.451950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17390 is same with the state(6) to be set 00:22:54.311 [2024-10-15 13:03:14.452027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17390 is same with the state(6) to be set 00:22:54.311 [2024-10-15 13:03:14.452036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17390 is same with the state(6) to be set 00:22:54.311 [2024-10-15 13:03:14.452043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17390 is same with the state(6) to be set 00:22:54.311 13:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:57.597 13:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:57.597 00:22:57.597 13:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:57.858 [2024-10-15 13:03:17.950195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18190 is same with the state(6) to be set 00:22:57.858 [2024-10-15 13:03:17.950237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18190 is same with the state(6) to be set 00:22:57.859 [2024-10-15 13:03:17.950245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18190 is same with the state(6) to 
be set 00:22:57.859 [2024-10-15 13:03:17.950252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18190 is same with the state(6) to be set 00:22:57.859 [2024-10-15 13:03:17.950258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18190 is same with the state(6) to be set 00:22:57.859 [2024-10-15 13:03:17.950265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18190 is same with the state(6) to be set 00:22:57.859 [2024-10-15 13:03:17.950270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18190 is same with the state(6) to be set 00:22:57.859 [2024-10-15 13:03:17.950277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18190 is same with the state(6) to be set 00:22:57.859 13:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:01.145 13:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.146 [2024-10-15 13:03:21.158169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.146 13:03:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:02.081 13:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:02.081 [2024-10-15 13:03:22.371667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ef0 is same with the state(6) to be set 00:23:02.081 [2024-10-15 13:03:22.371706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ef0 is same with the state(6) to be set 00:23:02.081 [2024-10-15 13:03:22.371714] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ef0 is same with the state(6) to be set 00:23:02.081 13:03:22 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@59 -- # wait 1309788 00:23:08.652 { 00:23:08.652 "results": [ 00:23:08.652 { 00:23:08.652 "job": "NVMe0n1", 00:23:08.652 "core_mask": "0x1", 00:23:08.652 "workload": "verify", 00:23:08.652 "status": "finished", 00:23:08.652 "verify_range": { 00:23:08.652 "start": 0, 00:23:08.652 "length": 16384 00:23:08.652 }, 00:23:08.652 "queue_depth": 128, 00:23:08.652 "io_size": 4096, 00:23:08.652 "runtime": 15.006056, 00:23:08.652 "iops": 11190.1488305788, 00:23:08.652 "mibps": 43.71151886944844, 00:23:08.652 "io_failed": 12005, 00:23:08.652 "io_timeout": 0, 00:23:08.652 "avg_latency_us": 10653.687522732354, 00:23:08.652 "min_latency_us": 423.25333333333333, 00:23:08.652 "max_latency_us": 30333.805714285714 00:23:08.652 } 00:23:08.652 ], 00:23:08.652 "core_count": 1 00:23:08.652 } 00:23:08.652 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1309562 00:23:08.652 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1309562 ']' 00:23:08.652 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1309562 00:23:08.652 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:08.652 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:08.652 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1309562 00:23:08.652 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:08.652 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:08.652 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1309562' 00:23:08.652 killing process with pid 1309562 00:23:08.653 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1309562 00:23:08.653 
13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1309562
00:23:08.653 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:08.653 [2024-10-15 13:03:12.311710] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization...
00:23:08.653 [2024-10-15 13:03:12.311766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309562 ]
00:23:08.653 [2024-10-15 13:03:12.377974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:08.653 [2024-10-15 13:03:12.419234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:08.653 Running I/O for 15 seconds...
00:23:08.653 11212.00 IOPS, 43.80 MiB/s [2024-10-15T11:03:28.972Z] [2024-10-15 13:03:14.453079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.653 [2024-10-15 13:03:14.453114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.653 [... repetitive nvme_qpair records elided: READ commands (sqid:1, nsid:1, len:8, lba 100232 through 100528, cid varies, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1, nsid:1, len:8, lba 100552 through 101024, cid varies, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" ...]
00:23:08.655 [2024-10-15 13:03:14.454591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.655 [2024-10-15 13:03:14.454597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.655 [2024-10-15 13:03:14.454616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.655 [2024-10-15 13:03:14.454631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.655 [2024-10-15 13:03:14.454645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.655 [2024-10-15 13:03:14.454659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.655 [2024-10-15 13:03:14.454696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101072 len:8 PRP1 0x0 PRP2 0x0 
00:23:08.655 [2024-10-15 13:03:14.454703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.655 [2024-10-15 13:03:14.454717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.655 [2024-10-15 13:03:14.454723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101080 len:8 PRP1 0x0 PRP2 0x0 00:23:08.655 [2024-10-15 13:03:14.454729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.655 [2024-10-15 13:03:14.454741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.655 [2024-10-15 13:03:14.454747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101088 len:8 PRP1 0x0 PRP2 0x0 00:23:08.655 [2024-10-15 13:03:14.454753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.655 [2024-10-15 13:03:14.454764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.655 [2024-10-15 13:03:14.454769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101096 len:8 PRP1 0x0 PRP2 0x0 00:23:08.655 [2024-10-15 13:03:14.454776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454782] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.655 [2024-10-15 13:03:14.454787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.655 [2024-10-15 13:03:14.454792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101104 len:8 PRP1 0x0 PRP2 0x0 00:23:08.655 [2024-10-15 13:03:14.454802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.655 [2024-10-15 13:03:14.454814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.655 [2024-10-15 13:03:14.454819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101112 len:8 PRP1 0x0 PRP2 0x0 00:23:08.655 [2024-10-15 13:03:14.454826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.655 [2024-10-15 13:03:14.454837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.655 [2024-10-15 13:03:14.454842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101120 len:8 PRP1 0x0 PRP2 0x0 00:23:08.655 [2024-10-15 13:03:14.454848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.655 [2024-10-15 13:03:14.454859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.655 [2024-10-15 13:03:14.454865] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101128 len:8 PRP1 0x0 PRP2 0x0 00:23:08.655 [2024-10-15 13:03:14.454871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.655 [2024-10-15 13:03:14.454883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.655 [2024-10-15 13:03:14.454888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101136 len:8 PRP1 0x0 PRP2 0x0 00:23:08.655 [2024-10-15 13:03:14.454894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.655 [2024-10-15 13:03:14.454900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.655 [2024-10-15 13:03:14.454905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.655 [2024-10-15 13:03:14.454911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101144 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.454917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.454923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.454929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.454934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101152 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.454940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.454947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.454951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.454956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101160 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.454963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.454969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.454974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.454981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101168 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.454989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.454996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.455001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.455006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101176 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.455012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.455024] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.455029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101184 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.455035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.455046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.455052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101192 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.455058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.455069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.455074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101200 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.455081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.455093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.455098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101208 len:8 PRP1 0x0 PRP2 0x0 
00:23:08.656 [2024-10-15 13:03:14.455104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.455115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.455121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101216 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.455127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.455138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.455144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101224 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.455150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.455162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.455167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101232 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.455176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455182] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.455188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.455193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101240 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.455199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.455210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.455215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100536 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.455221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.656 [2024-10-15 13:03:14.455233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.656 [2024-10-15 13:03:14.455238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100544 len:8 PRP1 0x0 PRP2 0x0 00:23:08.656 [2024-10-15 13:03:14.455244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455284] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20b74e0 was disconnected and freed. reset controller. 
00:23:08.656 [2024-10-15 13:03:14.455292] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:08.656 [2024-10-15 13:03:14.455312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.656 [2024-10-15 13:03:14.455319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.656 [2024-10-15 13:03:14.455332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.656 [2024-10-15 13:03:14.455346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.656 [2024-10-15 13:03:14.455359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:14.455365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:08.656 [2024-10-15 13:03:14.458118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:08.656 [2024-10-15 13:03:14.458145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2094400 (9): Bad file descriptor 00:23:08.656 [2024-10-15 13:03:14.489945] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:08.656 11113.00 IOPS, 43.41 MiB/s [2024-10-15T11:03:28.975Z] 11239.67 IOPS, 43.90 MiB/s [2024-10-15T11:03:28.975Z] 11268.25 IOPS, 44.02 MiB/s [2024-10-15T11:03:28.975Z] [2024-10-15 13:03:17.951685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.656 [2024-10-15 13:03:17.951723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:17.951740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.656 [2024-10-15 13:03:17.951748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:17.951758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.656 [2024-10-15 13:03:17.951765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:17.951773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.656 [2024-10-15 13:03:17.951779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:08.656 [2024-10-15 13:03:17.951788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.656 [2024-10-15 13:03:17.951795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:17.951804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.656 [2024-10-15 13:03:17.951811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:17.951819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.656 [2024-10-15 13:03:17.951825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:17.951834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.656 [2024-10-15 13:03:17.951841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:17.951849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.656 [2024-10-15 13:03:17.951855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:17.951863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.656 [2024-10-15 13:03:17.951870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.656 [2024-10-15 13:03:17.951878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.656 [2024-10-15 13:03:17.951884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.951893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.951901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.951910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.951921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.951930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.951936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.951944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.951951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.951959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 
lba:34616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.951966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.951974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.951981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.951990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.951996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.657 [2024-10-15 13:03:17.952011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 
13:03:17.952046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952126] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.657 [2024-10-15 13:03:17.952365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.657 [2024-10-15 13:03:17.952371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 
[2024-10-15 13:03:17.952458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952541] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 
lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 
13:03:17.952876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952957] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.658 [2024-10-15 13:03:17.952965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.658 [2024-10-15 13:03:17.952971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.952979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.659 [2024-10-15 13:03:17.952986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.952993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.659 [2024-10-15 13:03:17.953000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.659 [2024-10-15 13:03:17.953014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.659 [2024-10-15 13:03:17.953029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:08.659 [2024-10-15 13:03:17.953043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.659 [2024-10-15 13:03:17.953057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.659 [2024-10-15 13:03:17.953071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35224 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35232 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953137] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35240 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35248 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35256 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953219] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35264 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35272 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35280 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35288 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35296 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35304 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35312 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953375] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35320 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35328 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35336 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35344 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 
[2024-10-15 13:03:17.953455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35352 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35360 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35368 len:8 PRP1 0x0 PRP2 0x0 00:23:08.659 [2024-10-15 13:03:17.953523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.659 [2024-10-15 13:03:17.953531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:08.659 [2024-10-15 13:03:17.953536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.659 [2024-10-15 13:03:17.953541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35376 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35384 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35392 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35400 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35408 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35416 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35424 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35432 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35440 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35448 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34448 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34456 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34464 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.953830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.953836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.953841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.953846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34472 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.963980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.963995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.964003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.964010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34480 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.964018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.964027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.964034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.964041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34488 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.964050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.964060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.660 [2024-10-15 13:03:17.964067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.660 [2024-10-15 13:03:17.964074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34496 len:8 PRP1 0x0 PRP2 0x0 00:23:08.660 [2024-10-15 13:03:17.964082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.964128] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21edbd0 was disconnected and 
freed. reset controller. 00:23:08.660 [2024-10-15 13:03:17.964141] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:08.660 [2024-10-15 13:03:17.964165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.660 [2024-10-15 13:03:17.964175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.964185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.660 [2024-10-15 13:03:17.964194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.964203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.660 [2024-10-15 13:03:17.964212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.964221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.660 [2024-10-15 13:03:17.964230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:17.964238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:08.660 [2024-10-15 13:03:17.964264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2094400 (9): Bad file descriptor 00:23:08.660 [2024-10-15 13:03:17.967977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:08.660 [2024-10-15 13:03:18.043622] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:08.660 11099.40 IOPS, 43.36 MiB/s [2024-10-15T11:03:28.979Z] 11144.00 IOPS, 43.53 MiB/s [2024-10-15T11:03:28.979Z] 11153.43 IOPS, 43.57 MiB/s [2024-10-15T11:03:28.979Z] 11200.38 IOPS, 43.75 MiB/s [2024-10-15T11:03:28.979Z] 11221.44 IOPS, 43.83 MiB/s [2024-10-15T11:03:28.979Z] [2024-10-15 13:03:22.373102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.660 [2024-10-15 13:03:22.373136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:22.373151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.660 [2024-10-15 13:03:22.373159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:22.373168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.660 [2024-10-15 13:03:22.373175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:22.373184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.660 [2024-10-15 13:03:22.373191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:22.373199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.660 [2024-10-15 13:03:22.373205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.660 [2024-10-15 13:03:22.373214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 
[2024-10-15 13:03:22.373369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.661 [2024-10-15 13:03:22.373497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 
13:03:22.373635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373715] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.661 [2024-10-15 13:03:22.373753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.661 [2024-10-15 13:03:22.373759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.373987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.662 [2024-10-15 13:03:22.373994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.374003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.662 [2024-10-15 13:03:22.374010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.374018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.662 [2024-10-15 13:03:22.374025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.374033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.662 [2024-10-15 13:03:22.374039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.374047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.662 
[2024-10-15 13:03:22.374054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.374062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.662 [2024-10-15 13:03:22.374069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.374077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.662 [2024-10-15 13:03:22.374084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.374092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.662 [2024-10-15 13:03:22.374098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.374106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.662 [2024-10-15 13:03:22.374114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.374122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.662 [2024-10-15 13:03:22.374129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.662 [2024-10-15 13:03:22.374137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.662 [2024-10-15 13:03:22.374361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.662 [2024-10-15 13:03:22.374367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.663 [2024-10-15 13:03:22.374383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.663 [2024-10-15 13:03:22.374398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.663 [2024-10-15 13:03:22.374414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.663 [2024-10-15 13:03:22.374428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.663 [2024-10-15 13:03:22.374443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.663 [2024-10-15 13:03:22.374458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.663 [2024-10-15 13:03:22.374472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.663 [2024-10-15 13:03:22.374705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.663 [2024-10-15 13:03:22.374732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62696 len:8 PRP1 0x0 PRP2 0x0
00:23:08.663 [2024-10-15 13:03:22.374739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:08.663 [2024-10-15 13:03:22.374786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:08.663 [2024-10-15 13:03:22.374800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:08.663 [2024-10-15 13:03:22.374816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:08.663 [2024-10-15 13:03:22.374830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.374836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2094400 is same with the state(6) to be set
00:23:08.663 [2024-10-15 13:03:22.375010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.663 [2024-10-15 13:03:22.375017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.663 [2024-10-15 13:03:22.375023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62704 len:8 PRP1 0x0 PRP2 0x0
00:23:08.663 [2024-10-15 13:03:22.375030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.375039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.663 [2024-10-15 13:03:22.375044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.663 [2024-10-15 13:03:22.375051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62712 len:8 PRP1 0x0 PRP2 0x0
00:23:08.663 [2024-10-15 13:03:22.375057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.375064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.663 [2024-10-15 13:03:22.375069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.663 [2024-10-15 13:03:22.375075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62720 len:8 PRP1 0x0 PRP2 0x0
00:23:08.663 [2024-10-15 13:03:22.375082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.375088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.663 [2024-10-15 13:03:22.375093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.663 [2024-10-15 13:03:22.375099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62728 len:8 PRP1 0x0 PRP2 0x0
00:23:08.663 [2024-10-15 13:03:22.375105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.375112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.663 [2024-10-15 13:03:22.375117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.663 [2024-10-15 13:03:22.375123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62736 len:8 PRP1 0x0 PRP2 0x0
00:23:08.663 [2024-10-15 13:03:22.375129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.375135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.663 [2024-10-15 13:03:22.375141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.663 [2024-10-15 13:03:22.375146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62744 len:8 PRP1 0x0 PRP2 0x0
00:23:08.663 [2024-10-15 13:03:22.375155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.375161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.663 [2024-10-15 13:03:22.375168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.663 [2024-10-15 13:03:22.375173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62752 len:8 PRP1 0x0 PRP2 0x0
00:23:08.663 [2024-10-15 13:03:22.375180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.375187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.663 [2024-10-15 13:03:22.375192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.663 [2024-10-15 13:03:22.375197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62184 len:8 PRP1 0x0 PRP2 0x0
00:23:08.663 [2024-10-15 13:03:22.375204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.663 [2024-10-15 13:03:22.375210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.663 [2024-10-15 13:03:22.375215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.663 [2024-10-15 13:03:22.375220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62192 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.375227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.375234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.375240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.375246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62200 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.375252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.375259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.375264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.375270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62208 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.375277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.375283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.375288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.375293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62216 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.375300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.375306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.375311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.375317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62224 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.375323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.375330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.375335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.375340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62232 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.375348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.375356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.375362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.375367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62240 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.385862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.385874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.385880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.385886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62248 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.385893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.385901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.385906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.385912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62256 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.385919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.385926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.385933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.385938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62264 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.385946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.385953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.385958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.385964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62272 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.385971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.385978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.385983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.385989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62280 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.385996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62288 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62296 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62304 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61736 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61744 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61752 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61760 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61768 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61776 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61784 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61792 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61800 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.664 [2024-10-15 13:03:22.386323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61808 len:8 PRP1 0x0 PRP2 0x0
00:23:08.664 [2024-10-15 13:03:22.386330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.664 [2024-10-15 13:03:22.386337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.664 [2024-10-15 13:03:22.386343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.665 [2024-10-15 13:03:22.386348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61816 len:8 PRP1 0x0 PRP2 0x0
00:23:08.665 [2024-10-15 13:03:22.386355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.665 [2024-10-15 13:03:22.386363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.665 [2024-10-15 13:03:22.386368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.665 [2024-10-15 13:03:22.386374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61824 len:8 PRP1 0x0 PRP2 0x0
00:23:08.665 [2024-10-15 13:03:22.386381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.665 [2024-10-15 13:03:22.386388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.665 [2024-10-15 13:03:22.386394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.665 [2024-10-15 13:03:22.386400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61832 len:8 PRP1 0x0 PRP2 0x0
00:23:08.665 [2024-10-15 13:03:22.386407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.665 [2024-10-15 13:03:22.386414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.665 [2024-10-15 13:03:22.386419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.665 [2024-10-15 13:03:22.386425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61840 len:8 PRP1 0x0 PRP2 0x0
00:23:08.665 [2024-10-15 13:03:22.386432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.665 [2024-10-15 13:03:22.386443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.665 [2024-10-15 13:03:22.386448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.665 [2024-10-15 13:03:22.386454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61848 len:8 PRP1 0x0 PRP2 0x0
00:23:08.665 [2024-10-15 13:03:22.386462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.665 [2024-10-15 13:03:22.386469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.665 [2024-10-15 13:03:22.386474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.665 [2024-10-15 13:03:22.386480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61856 len:8 PRP1 0x0 PRP2 0x0
00:23:08.665 [2024-10-15 13:03:22.386486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.665 [2024-10-15 13:03:22.386493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.665 [2024-10-15 13:03:22.386499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.665 [2024-10-15 13:03:22.386505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61864 len:8 PRP1 0x0 PRP2 0x0
00:23:08.665 [2024-10-15 13:03:22.386512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.665 [2024-10-15 13:03:22.386519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.665 [2024-10-15 13:03:22.386525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.665 [2024-10-15 13:03:22.386531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61872 len:8 PRP1 0x0 PRP2 0x0
00:23:08.665 [2024-10-15 13:03:22.386538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.665 [2024-10-15 13:03:22.386545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.665 [2024-10-15 13:03:22.386551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:08.665 [2024-10-15 13:03:22.386557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61880 len:8 PRP1 0x0 PRP2 0x0
00:23:08.665 [2024-10-15 13:03:22.386564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.665 [2024-10-15 13:03:22.386571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:08.665
[2024-10-15 13:03:22.386576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 13:03:22.386582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61888 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 13:03:22.386613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61896 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 13:03:22.386639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61904 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 13:03:22.386667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:61912 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 13:03:22.386692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61920 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 13:03:22.386718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62312 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 13:03:22.386743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62320 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386757] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 13:03:22.386769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62328 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 13:03:22.386794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62336 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 13:03:22.386819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62344 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 
13:03:22.386847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62352 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 13:03:22.386872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62360 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.665 [2024-10-15 13:03:22.386898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62368 len:8 PRP1 0x0 PRP2 0x0 00:23:08.665 [2024-10-15 13:03:22.386905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.665 [2024-10-15 13:03:22.386912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.665 [2024-10-15 13:03:22.386917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.386923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62376 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.386930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.386937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.386943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.386948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62384 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.386955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.386963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.386971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.386977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62392 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.386984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.386991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.386996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62400 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387023] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62408 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62416 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62424 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62432 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 
[2024-10-15 13:03:22.387115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62440 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62448 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62456 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62464 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62472 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62480 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62488 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62496 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62504 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62512 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62520 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.387426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62528 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.387433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.387440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.387446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.394678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62536 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.394695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.394706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.394713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.394722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62544 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.394731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.394741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.394748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.394756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62552 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.394765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.394775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.394782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.394790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62560 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.394799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.394809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.394816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.394824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62568 len:8 PRP1 0x0 PRP2 0x0 00:23:08.666 [2024-10-15 13:03:22.394833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.666 [2024-10-15 13:03:22.394843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.666 [2024-10-15 13:03:22.394850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.666 [2024-10-15 13:03:22.394858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61928 len:8 PRP1 0x0 PRP2 0x0 00:23:08.667 [2024-10-15 13:03:22.394868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.667 [2024-10-15 13:03:22.394878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.667 [2024-10-15 13:03:22.394886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.667 [2024-10-15 13:03:22.394894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61936 len:8 PRP1 0x0 PRP2 0x0 00:23:08.667 [2024-10-15 13:03:22.394903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.667 [2024-10-15 13:03:22.394913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.667 [2024-10-15 13:03:22.394920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.667 [2024-10-15 13:03:22.394927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61944 len:8 PRP1 0x0 PRP2 0x0 00:23:08.667 [2024-10-15 13:03:22.394937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.667 [2024-10-15 13:03:22.394946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.667 
[2024-10-15 13:03:22.394953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.667 [2024-10-15 13:03:22.394964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61952 len:8 PRP1 0x0 PRP2 0x0 00:23:08.667 [2024-10-15 13:03:22.394973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.667 [2024-10-15 13:03:22.394982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.667 [2024-10-15 13:03:22.394990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.667 [2024-10-15 13:03:22.394997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61960 len:8 PRP1 0x0 PRP2 0x0 00:23:08.667 [2024-10-15 13:03:22.395007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.667 [2024-10-15 13:03:22.395017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.667 [2024-10-15 13:03:22.395023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.667 [2024-10-15 13:03:22.395032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61968 len:8 PRP1 0x0 PRP2 0x0 00:23:08.667 [2024-10-15 13:03:22.395041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.667 [2024-10-15 13:03:22.395050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.667 [2024-10-15 13:03:22.395058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.667 [2024-10-15 13:03:22.395066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:61976 len:8 PRP1 0x0 PRP2 0x0 00:23:08.667 [2024-10-15 13:03:22.395075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.667 [2024-10-15 13:03:22.395084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.667 [2024-10-15 13:03:22.395091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.667 [2024-10-15 13:03:22.395099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61984 len:8 PRP1 0x0 PRP2 0x0 00:23:08.667 [2024-10-15 13:03:22.395108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.667 [2024-10-15 13:03:22.395117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.667 [2024-10-15 13:03:22.395124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.667 [2024-10-15 13:03:22.395132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61992 len:8 PRP1 0x0 PRP2 0x0 00:23:08.667 [2024-10-15 13:03:22.395142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.667 [2024-10-15 13:03:22.395152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.667 [2024-10-15 13:03:22.395159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.667 [2024-10-15 13:03:22.395167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62000 len:8 PRP1 0x0 PRP2 0x0 00:23:08.667 [2024-10-15 13:03:22.395176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.667 [2024-10-15 13:03:22.395186] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.667 [2024-10-15 13:03:22.395194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.667 [2024-10-15 13:03:22.395201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62008 len:8 PRP1 0x0 PRP2 0x0 00:23:08.667 [2024-10-15 13:03:22.395211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical abort/manual-completion entries repeated for READ lba:62016 through lba:62176 and WRITE lba:62576 through lba:62696 (len:8, lba step 8), all completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0, timestamps 13:03:22.395222 through 13:03:22.396481 ...]
00:23:08.668 [2024-10-15 13:03:22.396530] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21eec30 was disconnected and freed. reset controller. 00:23:08.668 [2024-10-15 13:03:22.396542] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:08.668 [2024-10-15 13:03:22.396552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:08.668 [2024-10-15 13:03:22.396591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2094400 (9): Bad file descriptor 00:23:08.668 [2024-10-15 13:03:22.400563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:08.668 [2024-10-15 13:03:22.557207] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:08.668 11047.90 IOPS, 43.16 MiB/s [2024-10-15T11:03:28.987Z] 11086.09 IOPS, 43.31 MiB/s [2024-10-15T11:03:28.987Z] 11129.67 IOPS, 43.48 MiB/s [2024-10-15T11:03:28.987Z] 11155.00 IOPS, 43.57 MiB/s [2024-10-15T11:03:28.987Z] 11172.64 IOPS, 43.64 MiB/s [2024-10-15T11:03:28.987Z] 11189.87 IOPS, 43.71 MiB/s 00:23:08.668 Latency(us) 00:23:08.668 [2024-10-15T11:03:28.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.669 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:08.669 Verification LBA range: start 0x0 length 0x4000 00:23:08.669 NVMe0n1 : 15.01 11190.15 43.71 800.01 0.00 10653.69 423.25 30333.81 00:23:08.669 [2024-10-15T11:03:28.988Z] =================================================================================================================== 00:23:08.669 [2024-10-15T11:03:28.988Z] Total : 11190.15 43.71 800.01 0.00 10653.69 423.25 30333.81 00:23:08.669 Received shutdown signal, test time was about 15.000000 seconds 00:23:08.669 00:23:08.669 Latency(us) 00:23:08.669 [2024-10-15T11:03:28.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.669 [2024-10-15T11:03:28.988Z] =================================================================================================================== 00:23:08.669 [2024-10-15T11:03:28.988Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1312309 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1312309 /var/tmp/bdevperf.sock 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1312309 ']' 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:08.669 13:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:08.926 [2024-10-15 13:03:29.031249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:08.926 13:03:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:08.926 [2024-10-15 13:03:29.215726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:08.926 
13:03:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:09.492 NVMe0n1 00:23:09.492 13:03:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:09.750 00:23:09.750 13:03:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:10.316 00:23:10.316 13:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:10.317 13:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:10.317 13:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:10.575 13:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:13.856 13:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:13.856 13:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:13.856 13:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1313073 00:23:13.856 13:03:33 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:13.856 13:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1313073 00:23:14.792 { 00:23:14.792 "results": [ 00:23:14.792 { 00:23:14.792 "job": "NVMe0n1", 00:23:14.792 "core_mask": "0x1", 00:23:14.792 "workload": "verify", 00:23:14.792 "status": "finished", 00:23:14.792 "verify_range": { 00:23:14.792 "start": 0, 00:23:14.792 "length": 16384 00:23:14.792 }, 00:23:14.792 "queue_depth": 128, 00:23:14.792 "io_size": 4096, 00:23:14.792 "runtime": 1.003418, 00:23:14.792 "iops": 11672.104745978246, 00:23:14.792 "mibps": 45.59415916397752, 00:23:14.792 "io_failed": 0, 00:23:14.792 "io_timeout": 0, 00:23:14.792 "avg_latency_us": 10920.994306531356, 00:23:14.792 "min_latency_us": 912.8228571428572, 00:23:14.792 "max_latency_us": 9549.531428571428 00:23:14.792 } 00:23:14.792 ], 00:23:14.792 "core_count": 1 00:23:14.792 } 00:23:14.792 13:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:14.792 [2024-10-15 13:03:28.670094] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:23:14.792 [2024-10-15 13:03:28.670148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312309 ] 00:23:14.792 [2024-10-15 13:03:28.736564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.792 [2024-10-15 13:03:28.773861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.792 [2024-10-15 13:03:30.719736] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:14.792 [2024-10-15 13:03:30.719782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.792 [2024-10-15 13:03:30.719793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.792 [2024-10-15 13:03:30.719801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.792 [2024-10-15 13:03:30.719808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.792 [2024-10-15 13:03:30.719816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.792 [2024-10-15 13:03:30.719827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.792 [2024-10-15 13:03:30.719834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.792 [2024-10-15 13:03:30.719840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.792 [2024-10-15 13:03:30.719847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:14.792 [2024-10-15 13:03:30.719871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:14.792 [2024-10-15 13:03:30.719885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e7400 (9): Bad file descriptor 00:23:14.792 [2024-10-15 13:03:30.770579] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:14.792 Running I/O for 1 seconds... 00:23:14.792 11584.00 IOPS, 45.25 MiB/s 00:23:14.792 Latency(us) 00:23:14.792 [2024-10-15T11:03:35.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.792 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:14.792 Verification LBA range: start 0x0 length 0x4000 00:23:14.792 NVMe0n1 : 1.00 11672.10 45.59 0.00 0.00 10920.99 912.82 9549.53 00:23:14.792 [2024-10-15T11:03:35.111Z] =================================================================================================================== 00:23:14.792 [2024-10-15T11:03:35.111Z] Total : 11672.10 45.59 0.00 0.00 10920.99 912.82 9549.53 00:23:14.792 13:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:14.792 13:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:15.050 13:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:15.308 13:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:15.308 13:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:15.566 13:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:15.566 13:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:18.849 13:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:18.849 13:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:18.849 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1312309 00:23:18.849 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1312309 ']' 00:23:18.849 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1312309 00:23:18.849 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:18.849 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.849 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1312309 00:23:18.849 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:18.849 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:18.849 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1312309' 00:23:18.849 killing process with pid 1312309 00:23:18.849 13:03:39 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1312309 00:23:18.849 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1312309 00:23:19.107 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:19.107 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:19.366 rmmod nvme_tcp 00:23:19.366 rmmod nvme_fabrics 00:23:19.366 rmmod nvme_keyring 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1309297 ']' 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # 
killprocess 1309297 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1309297 ']' 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1309297 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1309297 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1309297' 00:23:19.366 killing process with pid 1309297 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1309297 00:23:19.366 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1309297 00:23:19.626 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:19.626 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:19.626 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:19.626 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:19.626 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:23:19.626 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:19.626 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:23:19.626 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:19.626 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:19.626 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.626 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.626 13:03:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.164 13:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:22.164 00:23:22.164 real 0m37.525s 00:23:22.164 user 1m58.772s 00:23:22.164 sys 0m7.967s 00:23:22.164 13:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.164 13:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:22.164 ************************************ 00:23:22.164 END TEST nvmf_failover 00:23:22.164 ************************************ 00:23:22.164 13:03:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:22.164 13:03:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:22.164 13:03:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:22.164 13:03:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.164 ************************************ 00:23:22.164 START TEST nvmf_host_discovery 00:23:22.164 ************************************ 00:23:22.164 13:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:22.164 * Looking for test storage... 
00:23:22.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:22.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.164 --rc genhtml_branch_coverage=1 00:23:22.164 --rc genhtml_function_coverage=1 00:23:22.164 --rc 
genhtml_legend=1 00:23:22.164 --rc geninfo_all_blocks=1 00:23:22.164 --rc geninfo_unexecuted_blocks=1 00:23:22.164 00:23:22.164 ' 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:22.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.164 --rc genhtml_branch_coverage=1 00:23:22.164 --rc genhtml_function_coverage=1 00:23:22.164 --rc genhtml_legend=1 00:23:22.164 --rc geninfo_all_blocks=1 00:23:22.164 --rc geninfo_unexecuted_blocks=1 00:23:22.164 00:23:22.164 ' 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:22.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.164 --rc genhtml_branch_coverage=1 00:23:22.164 --rc genhtml_function_coverage=1 00:23:22.164 --rc genhtml_legend=1 00:23:22.164 --rc geninfo_all_blocks=1 00:23:22.164 --rc geninfo_unexecuted_blocks=1 00:23:22.164 00:23:22.164 ' 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:22.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.164 --rc genhtml_branch_coverage=1 00:23:22.164 --rc genhtml_function_coverage=1 00:23:22.164 --rc genhtml_legend=1 00:23:22.164 --rc geninfo_all_blocks=1 00:23:22.164 --rc geninfo_unexecuted_blocks=1 00:23:22.164 00:23:22.164 ' 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.164 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.164 13:03:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.165 13:03:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.165 13:03:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:22.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 
00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:22.165 13:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.738 
13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.738 13:03:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:28.738 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:28.738 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:28.738 Found net devices under 0000:86:00.0: cvl_0_0 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:28.738 Found net devices under 0000:86:00.1: cvl_0_1 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.738 13:03:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.738 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.738 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.738 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.738 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:23:28.738 00:23:28.738 --- 10.0.0.2 ping statistics --- 00:23:28.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.738 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:23:28.738 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:23:28.738 00:23:28.738 --- 10.0.0.1 ping statistics --- 00:23:28.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.739 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.739 
13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=1317471 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 1317471 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1317471 ']' 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.739 [2024-10-15 13:03:48.156536] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:23:28.739 [2024-10-15 13:03:48.156580] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.739 [2024-10-15 13:03:48.229575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.739 [2024-10-15 13:03:48.271635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.739 [2024-10-15 13:03:48.271666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.739 [2024-10-15 13:03:48.271673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.739 [2024-10-15 13:03:48.271679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.739 [2024-10-15 13:03:48.271684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:28.739 [2024-10-15 13:03:48.272230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.739 [2024-10-15 13:03:48.406509] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.739 [2024-10-15 13:03:48.418701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:28.739 13:03:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.739 null0 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.739 null1 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1317655 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1317655 /tmp/host.sock 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@831 -- # '[' -z 1317655 ']' 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:28.739 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.739 [2024-10-15 13:03:48.497061] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:23:28.739 [2024-10-15 13:03:48.497104] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1317655 ] 00:23:28.739 [2024-10-15 13:03:48.563550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.739 [2024-10-15 13:03:48.605605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:28.739 
13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.739 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:28.740 13:03:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:28.740 
13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.740 13:03:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.740 [2024-10-15 13:03:49.024233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.740 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.999 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:28.999 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:28.999 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.999 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.999 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.999 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:23:28.999 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.999 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.999 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:23:29.000 13:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:29.567 [2024-10-15 13:03:49.764757] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:29.567 [2024-10-15 13:03:49.764779] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:29.567 [2024-10-15 13:03:49.764793] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.567 [2024-10-15 13:03:49.852051] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:29.825 [2024-10-15 13:03:49.915820] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:29.825 [2024-10-15 13:03:49.915838] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:30.083 13:03:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:30.083 13:03:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:30.083 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.341 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:30.341 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:30.341 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:30.341 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:30.341 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == 
expected_count))' 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.342 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.600 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:30.600 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:30.600 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:30.600 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:30.600 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:30.600 13:03:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.600 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.600 [2024-10-15 13:03:50.692791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:30.600 [2024-10-15 13:03:50.693765] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:30.600 [2024-10-15 13:03:50.693793] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:30.600 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.600 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:30.600 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:30.600 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:30.600 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:30.600 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:30.601 13:03:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:30.601 
13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:30.601 13:03:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.601 [2024-10-15 13:03:50.820482] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:30.601 13:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:30.859 [2024-10-15 13:03:51.122117] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:30.859 [2024-10-15 13:03:51.122135] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:30.859 [2024-10-15 13:03:51.122140] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:31.794 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.795 [2024-10-15 13:03:51.933140] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:31.795 [2024-10-15 13:03:51.933163] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:31.795 [2024-10-15 13:03:51.933803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.795 [2024-10-15 13:03:51.933819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-10-15 
13:03:51.933828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.795 [2024-10-15 13:03:51.933835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-10-15 13:03:51.933842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.795 [2024-10-15 13:03:51.933849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-10-15 13:03:51.933856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.795 [2024-10-15 13:03:51.933863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-10-15 13:03:51.933870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7450 is same with the state(6) to be set 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:31.795 
13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:31.795 [2024-10-15 13:03:51.943814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7450 (9): Bad file descriptor 00:23:31.795 [2024-10-15 13:03:51.953851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.795 [2024-10-15 13:03:51.954095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.795 [2024-10-15 13:03:51.954111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7450 with addr=10.0.0.2, port=4420 00:23:31.795 [2024-10-15 13:03:51.954119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7450 is same with the state(6) to be set 00:23:31.795 [2024-10-15 13:03:51.954132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7450 (9): Bad file descriptor 00:23:31.795 [2024-10-15 13:03:51.954142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.795 [2024-10-15 13:03:51.954148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 
00:23:31.795 [2024-10-15 13:03:51.954156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.795 [2024-10-15 13:03:51.954167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.795 [2024-10-15 13:03:51.963906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.795 [2024-10-15 13:03:51.964095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.795 [2024-10-15 13:03:51.964115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7450 with addr=10.0.0.2, port=4420 00:23:31.795 [2024-10-15 13:03:51.964122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7450 is same with the state(6) to be set 00:23:31.795 [2024-10-15 13:03:51.964133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7450 (9): Bad file descriptor 00:23:31.795 [2024-10-15 13:03:51.964143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.795 [2024-10-15 13:03:51.964149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.795 [2024-10-15 13:03:51.964156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.795 [2024-10-15 13:03:51.964165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:31.795 [2024-10-15 13:03:51.973956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.795 [2024-10-15 13:03:51.974193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.795 [2024-10-15 13:03:51.974205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7450 with addr=10.0.0.2, port=4420 00:23:31.795 [2024-10-15 13:03:51.974212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7450 is same with the state(6) to be set 00:23:31.795 [2024-10-15 13:03:51.974222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7450 (9): Bad file descriptor 00:23:31.795 [2024-10-15 13:03:51.974237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.795 [2024-10-15 13:03:51.974244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.795 [2024-10-15 13:03:51.974254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.795 [2024-10-15 13:03:51.974264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:31.795 [2024-10-15 13:03:51.984006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.795 [2024-10-15 13:03:51.984254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.795 [2024-10-15 13:03:51.984268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7450 with addr=10.0.0.2, port=4420 00:23:31.795 [2024-10-15 13:03:51.984275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7450 is same with the state(6) to be set 00:23:31.795 [2024-10-15 13:03:51.984286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7450 (9): Bad file descriptor 00:23:31.795 [2024-10-15 13:03:51.984296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.795 [2024-10-15 13:03:51.984302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.795 [2024-10-15 13:03:51.984308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.795 [2024-10-15 13:03:51.984318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:31.795 [2024-10-15 13:03:51.994062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.795 13:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:31.795 [2024-10-15 13:03:51.994228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.795 [2024-10-15 13:03:51.994240] 
nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7450 with addr=10.0.0.2, port=4420 00:23:31.795 [2024-10-15 13:03:51.994247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7450 is same with the state(6) to be set 00:23:31.795 [2024-10-15 13:03:51.994257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7450 (9): Bad file descriptor 00:23:31.795 [2024-10-15 13:03:51.994267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.795 [2024-10-15 13:03:51.994274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.795 [2024-10-15 13:03:51.994282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.795 [2024-10-15 13:03:51.994296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:31.795 [2024-10-15 13:03:52.004116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.795 [2024-10-15 13:03:52.004308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.795 [2024-10-15 13:03:52.004329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7450 with addr=10.0.0.2, port=4420 00:23:31.795 [2024-10-15 13:03:52.004336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7450 is same with the state(6) to be set 00:23:31.795 [2024-10-15 13:03:52.004346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7450 (9): Bad file descriptor 00:23:31.795 [2024-10-15 13:03:52.004356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.795 [2024-10-15 13:03:52.004362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.796 [2024-10-15 13:03:52.004368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.796 [2024-10-15 13:03:52.004378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:31.796 [2024-10-15 13:03:52.014168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.796 [2024-10-15 13:03:52.014401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.796 [2024-10-15 13:03:52.014414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7450 with addr=10.0.0.2, port=4420 00:23:31.796 [2024-10-15 13:03:52.014420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7450 is same with the state(6) to be set 00:23:31.796 [2024-10-15 13:03:52.014431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7450 (9): Bad file descriptor 00:23:31.796 [2024-10-15 13:03:52.014446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.796 [2024-10-15 13:03:52.014453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.796 [2024-10-15 13:03:52.014459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.796 [2024-10-15 13:03:52.014469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:31.796 [2024-10-15 13:03:52.019035] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:31.796 [2024-10-15 13:03:52.019050] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:31.796 13:03:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.796 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # get_subsystem_names 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.055 13:03:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.055 13:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.430 [2024-10-15 13:03:53.313265] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:33.430 [2024-10-15 13:03:53.313282] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:33.430 [2024-10-15 13:03:53.313293] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:33.430 [2024-10-15 13:03:53.399548] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
new subsystem nvme0 00:23:33.430 [2024-10-15 13:03:53.661851] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:33.431 [2024-10-15 13:03:53.661878] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.431 request: 00:23:33.431 { 
00:23:33.431 "name": "nvme", 00:23:33.431 "trtype": "tcp", 00:23:33.431 "traddr": "10.0.0.2", 00:23:33.431 "adrfam": "ipv4", 00:23:33.431 "trsvcid": "8009", 00:23:33.431 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:33.431 "wait_for_attach": true, 00:23:33.431 "method": "bdev_nvme_start_discovery", 00:23:33.431 "req_id": 1 00:23:33.431 } 00:23:33.431 Got JSON-RPC error response 00:23:33.431 response: 00:23:33.431 { 00:23:33.431 "code": -17, 00:23:33.431 "message": "File exists" 00:23:33.431 } 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.431 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type 
-t rpc_cmd 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.690 request: 00:23:33.690 { 00:23:33.690 "name": "nvme_second", 00:23:33.690 "trtype": "tcp", 00:23:33.690 "traddr": "10.0.0.2", 00:23:33.690 "adrfam": "ipv4", 00:23:33.690 "trsvcid": "8009", 00:23:33.690 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:33.690 "wait_for_attach": true, 00:23:33.690 "method": "bdev_nvme_start_discovery", 00:23:33.690 "req_id": 1 00:23:33.690 } 00:23:33.690 Got JSON-RPC error response 00:23:33.690 response: 00:23:33.690 { 00:23:33.690 "code": -17, 00:23:33.690 "message": "File exists" 00:23:33.690 } 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:33.690 13:03:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:33.690 13:03:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.690 13:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.626 [2024-10-15 13:03:54.901303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.626 [2024-10-15 13:03:54.901331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec4890 with addr=10.0.0.2, port=8010 00:23:34.626 [2024-10-15 13:03:54.901346] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:34.626 [2024-10-15 13:03:54.901353] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:34.626 [2024-10-15 13:03:54.901359] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:36.005 [2024-10-15 13:03:55.903677] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.005 [2024-10-15 13:03:55.903702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec4890 with addr=10.0.0.2, port=8010 00:23:36.005 [2024-10-15 13:03:55.903714] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:36.005 [2024-10-15 13:03:55.903720] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:36.005 [2024-10-15 13:03:55.903725] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:36.941 [2024-10-15 13:03:56.905908] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:36.942 request: 00:23:36.942 { 00:23:36.942 "name": "nvme_second", 00:23:36.942 "trtype": "tcp", 00:23:36.942 "traddr": "10.0.0.2", 00:23:36.942 "adrfam": "ipv4", 00:23:36.942 "trsvcid": "8010", 00:23:36.942 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:36.942 "wait_for_attach": false, 00:23:36.942 "attach_timeout_ms": 3000, 00:23:36.942 "method": "bdev_nvme_start_discovery", 00:23:36.942 "req_id": 1 00:23:36.942 } 00:23:36.942 Got JSON-RPC error response 00:23:36.942 response: 00:23:36.942 { 00:23:36.942 "code": -110, 00:23:36.942 "message": "Connection timed out" 00:23:36.942 } 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1317655 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:36.942 13:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:36.942 rmmod nvme_tcp 00:23:36.942 rmmod nvme_fabrics 00:23:36.942 rmmod nvme_keyring 00:23:36.942 13:03:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 1317471 ']' 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 1317471 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1317471 ']' 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1317471 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1317471 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1317471' 00:23:36.942 killing process with pid 1317471 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1317471 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1317471 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:36.942 13:03:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.942 13:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:39.478 00:23:39.478 real 0m17.352s 00:23:39.478 user 0m20.750s 00:23:39.478 sys 0m5.862s 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.478 ************************************ 00:23:39.478 END TEST nvmf_host_discovery 00:23:39.478 ************************************ 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:39.478 
13:03:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.478 ************************************ 00:23:39.478 START TEST nvmf_host_multipath_status 00:23:39.478 ************************************ 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:39.478 * Looking for test storage... 00:23:39.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.478 13:03:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.478 
13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:39.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.478 --rc genhtml_branch_coverage=1 00:23:39.478 --rc genhtml_function_coverage=1 00:23:39.478 --rc genhtml_legend=1 00:23:39.478 --rc geninfo_all_blocks=1 00:23:39.478 --rc geninfo_unexecuted_blocks=1 00:23:39.478 00:23:39.478 ' 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:39.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.478 --rc genhtml_branch_coverage=1 00:23:39.478 --rc genhtml_function_coverage=1 00:23:39.478 --rc genhtml_legend=1 00:23:39.478 --rc geninfo_all_blocks=1 00:23:39.478 --rc geninfo_unexecuted_blocks=1 00:23:39.478 00:23:39.478 ' 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:39.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.478 --rc genhtml_branch_coverage=1 00:23:39.478 --rc genhtml_function_coverage=1 00:23:39.478 --rc genhtml_legend=1 00:23:39.478 --rc geninfo_all_blocks=1 00:23:39.478 --rc geninfo_unexecuted_blocks=1 00:23:39.478 00:23:39.478 ' 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:39.478 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:23:39.478 --rc genhtml_branch_coverage=1 00:23:39.478 --rc genhtml_function_coverage=1 00:23:39.478 --rc genhtml_legend=1 00:23:39.478 --rc geninfo_all_blocks=1 00:23:39.478 --rc geninfo_unexecuted_blocks=1 00:23:39.478 00:23:39.478 ' 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:39.478 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:39.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:39.479 13:03:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:39.479 13:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:46.051 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:46.051 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:46.051 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:46.052 Found net devices under 0000:86:00.0: cvl_0_0 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.052 13:04:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:46.052 Found net devices under 0000:86:00.1: cvl_0_1 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.052 13:04:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:23:46.052 00:23:46.052 --- 10.0.0.2 ping statistics --- 00:23:46.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.052 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:23:46.052 00:23:46.052 --- 10.0.0.1 ping statistics --- 00:23:46.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.052 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=1322683 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # waitforlisten 1322683 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1322683 ']' 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.052 [2024-10-15 13:04:05.561549] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:23:46.052 [2024-10-15 13:04:05.561596] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.052 [2024-10-15 13:04:05.634927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:46.052 [2024-10-15 13:04:05.676373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.052 [2024-10-15 13:04:05.676411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:46.052 [2024-10-15 13:04:05.676418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.052 [2024-10-15 13:04:05.676424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.052 [2024-10-15 13:04:05.676430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.052 [2024-10-15 13:04:05.677640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.052 [2024-10-15 13:04:05.677642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1322683 00:23:46.052 13:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:46.052 [2024-10-15 13:04:05.973774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.052 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:46.052 Malloc0 00:23:46.052 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:46.311 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:46.311 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.570 [2024-10-15 13:04:06.760693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.570 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:46.829 [2024-10-15 13:04:06.957181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:46.829 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1322966 00:23:46.829 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:46.829 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:46.829 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1322966 /var/tmp/bdevperf.sock 00:23:46.829 13:04:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1322966 ']' 00:23:46.829 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.829 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.829 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.829 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.829 13:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:47.087 13:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:47.087 13:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:47.087 13:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:47.345 13:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:47.602 Nvme0n1 00:23:47.602 13:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:48.236 Nvme0n1 00:23:48.236 13:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:48.236 13:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:50.211 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:50.211 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:50.211 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:50.469 13:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:51.439 13:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:51.439 13:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:51.439 13:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.439 13:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:51.698 13:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.698 13:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:51.698 13:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.698 13:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:51.956 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:51.956 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:51.956 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.956 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:52.215 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.215 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:52.215 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.215 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:52.215 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.215 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:52.215 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.215 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:52.473 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.473 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:52.473 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:52.473 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.732 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.732 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:52.732 13:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:52.990 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:52.990 13:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:54.366 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:54.366 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:54.366 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.366 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:54.367 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.367 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:54.367 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:54.367 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.625 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.625 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:54.625 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:23:54.625 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.625 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.625 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:54.625 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.625 13:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:54.883 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.883 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:54.883 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.883 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:55.141 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.141 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:55.141 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.141 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:55.400 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.400 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:55.400 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:55.658 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:55.658 13:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:57.034 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:57.034 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:57.034 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.034 13:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:57.034 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.034 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:57.034 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.034 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:57.293 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.293 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:57.293 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.293 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:57.293 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.293 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:57.293 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.293 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:57.554 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.554 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:57.554 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:57.554 13:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.816 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.816 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:57.816 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.816 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:58.074 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.074 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:58.075 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:58.333 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:58.333 13:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:59.706 13:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:59.706 13:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:59.706 13:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.706 13:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:59.706 13:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.706 13:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:59.706 13:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.706 13:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:59.965 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.965 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:59.965 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.965 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:59.965 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.965 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:59.965 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.965 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.224 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.224 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:00.224 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.224 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:00.482 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.482 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:00.482 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.482 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:00.740 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.740 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:00.740 13:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:00.740 13:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:00.998 13:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:02.373 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:02.373 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:02.373 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.373 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:02.373 13:04:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:02.373 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:02.373 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.373 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:02.373 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:02.373 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:02.373 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.373 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:02.633 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.633 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:02.633 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.633 13:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:02.892 
13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.892 13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:02.892 13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.892 13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:03.150 13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.150 13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:03.150 13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.150 13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:03.150 13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.150 13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:03.150 13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:03.408 13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:03.666 13:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:04.599 13:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:04.599 13:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:04.599 13:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.599 13:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:04.858 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.858 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:04.858 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.858 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:05.116 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.116 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:05.116 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:05.116 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:05.116 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:05.116 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:05.374 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:05.374 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:05.374 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:05.375 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:24:05.375 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:05.375 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:05.633 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:05.633 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:05.633 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:05.633 13:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:05.892 13:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:05.892 13:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:24:06.150 13:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:24:06.150 13:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:24:06.408 13:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:06.408 13:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:24:07.785 13:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:24:07.785 13:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:07.785 13:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:07.785 13:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:07.785 13:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:07.785 13:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:24:07.785 13:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:07.785 13:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:07.785 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:07.785 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:07.785 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:07.785 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:08.044 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:08.044 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:08.044 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:08.044 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:08.303 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:08.303 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:08.303 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:08.303 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:08.562 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:08.562 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:08.562 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:08.562 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:08.820 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:08.820 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:24:08.820 13:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:09.079 13:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:09.079 13:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:24:10.457 13:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:24:10.457 13:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:24:10.457 13:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:10.457 13:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:10.457 13:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:10.457 13:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:24:10.457 13:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:10.457 13:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:10.715 13:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:10.715 13:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:10.715 13:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:10.715 13:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:10.715 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:10.715 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:10.715 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:10.715 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:10.973 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:10.973 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:10.973 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:10.973 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:11.231 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:11.232 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:11.232 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:11.232 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:11.490 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:11.490 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:24:11.490 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:11.748 13:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:24:11.748 13:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:24:13.122 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:24:13.122 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:13.122 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:13.122 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:13.122 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:13.122 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:24:13.122 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:13.122 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:13.381 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:13.381 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:13.381 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:13.381 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:13.381 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:13.381 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:13.381 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:13.381 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:13.639 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:13.639 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:13.639 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:13.639 13:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:13.897 13:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:13.897 13:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:13.897 13:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:13.897 13:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:14.155 13:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:14.155 13:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:24:14.155 13:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:14.413 13:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:24:14.671 13:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:24:15.605 13:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:24:15.605 13:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:15.606 13:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:15.606 13:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:15.864 13:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:15.865 13:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:15.865 13:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:15.865 13:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:16.124 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:16.124 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:16.124 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.124 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:16.124 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:16.124 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:16.124 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.124 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:16.382 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:16.382 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:16.382 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.382 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:16.641 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:16.641 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:24:16.641 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.641 13:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:16.900 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:16.900 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1322966
00:24:16.900 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1322966 ']'
00:24:16.900 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1322966
00:24:16.900 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:24:16.900 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:16.900 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1322966
00:24:16.900 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:24:16.900 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:24:16.900 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1322966'
killing process with pid 1322966
13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1322966
00:24:16.900 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1322966
00:24:16.900 {
00:24:16.900 "results": [
00:24:16.900 {
00:24:16.900 "job": "Nvme0n1",
00:24:16.900 "core_mask": "0x4",
00:24:16.900 "workload": "verify",
00:24:16.900 "status": "terminated",
00:24:16.900 "verify_range": {
00:24:16.900 "start": 0,
00:24:16.900 "length": 16384
00:24:16.900 },
00:24:16.900 "queue_depth": 128,
00:24:16.900 "io_size": 4096,
00:24:16.900 "runtime": 28.724948,
00:24:16.900 "iops": 10728.687829130275,
00:24:16.900 "mibps": 41.908936832540135,
00:24:16.900 "io_failed": 0,
00:24:16.900 "io_timeout": 0,
00:24:16.900 "avg_latency_us": 11909.382487180925,
00:24:16.900 "min_latency_us": 464.2133333333333,
00:24:16.900 "max_latency_us": 3083812.083809524
00:24:16.900 }
00:24:16.900 ],
00:24:16.900 "core_count": 1
00:24:16.900 }
00:24:17.183 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1322966
00:24:17.183 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-10-15 13:04:07.033685] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization...
[2024-10-15 13:04:07.033743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322966 ]
[2024-10-15 13:04:07.105481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-15 13:04:07.145487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
00:24:17.184 11713.00 IOPS, 45.75 MiB/s [2024-10-15T11:04:37.503Z] 11719.50 IOPS, 45.78 MiB/s [2024-10-15T11:04:37.503Z] 11765.67 IOPS, 45.96 MiB/s [2024-10-15T11:04:37.503Z] 11695.50 IOPS, 45.69 MiB/s [2024-10-15T11:04:37.503Z] 11657.00 IOPS, 45.54 MiB/s [2024-10-15T11:04:37.503Z] 11651.00 IOPS, 45.51 MiB/s [2024-10-15T11:04:37.503Z] 11636.43 IOPS, 45.45 MiB/s [2024-10-15T11:04:37.503Z] 11620.75 IOPS, 45.39 MiB/s [2024-10-15T11:04:37.503Z] 11596.00 IOPS, 45.30 MiB/s [2024-10-15T11:04:37.503Z] 11589.20 IOPS, 45.27 MiB/s [2024-10-15T11:04:37.503Z] 11589.82 IOPS, 45.27 MiB/s [2024-10-15T11:04:37.503Z] 11582.17 IOPS, 45.24 MiB/s [2024-10-15T11:04:37.503Z]
[2024-10-15 13:04:21.044541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
[2024-10-15 13:04:21.044622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
[2024-10-15 13:04:21.044644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
[2024-10-15 13:04:21.044664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
[2024-10-15 13:04:21.044684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
[2024-10-15 13:04:21.044704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
[2024-10-15 13:04:21.044722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[2024-10-15 13:04:21.044741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0
[2024-10-15 13:04:21.044760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0
[2024-10-15 13:04:21.044779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[2024-10-15 13:04:21.044804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[2024-10-15 13:04:21.044823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0
[2024-10-15 13:04:21.044842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-15 13:04:21.044861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[2024-10-15 13:04:21.044880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[2024-10-15 13:04:21.044900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
[2024-10-15 13:04:21.044919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.044926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0
[2024-10-15 13:04:21.045209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0
[2024-10-15 13:04:21.045228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0
[2024-10-15 13:04:21.045247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0
[2024-10-15 13:04:21.045266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0
[2024-10-15 13:04:21.045285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0
[2024-10-15 13:04:21.045305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
[2024-10-15 13:04:21.045496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0
[2024-10-15 13:04:21.045514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0
[2024-10-15 13:04:21.045533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0
[2024-10-15 13:04:21.045552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-15 13:04:21.045559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0
[2024-10-15 13:04:21.045571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128880 len:8 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 
sqhd:0023 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:24:17.185 [2024-10-15 13:04:21.045910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.045985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.045992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 
[2024-10-15 13:04:21.046409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.185 
[2024-10-15 13:04:21.046519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 
13:04:21.046632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.185 [2024-10-15 13:04:21.046701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.185 [2024-10-15 13:04:21.046707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 
13:04:21.046740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 
13:04:21.046842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 
13:04:21.046947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.046984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.046991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 
13:04:21.047050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 
13:04:21.047156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 
13:04:21.047257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.186 [2024-10-15 13:04:21.047294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.186 [2024-10-15 13:04:21.047313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.186 [2024-10-15 13:04:21.047332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 13:04:21.047345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.186 [2024-10-15 13:04:21.047352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.186 [2024-10-15 
13:04:21.047366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.186 [2024-10-15 13:04:21.047372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs: READ and WRITE on sqid:1, lba 128448 through 129344, each completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-10-15 13:04:21.047 to 13:04:21.060 ...]
00:24:17.189 [2024-10-15 13:04:21.060254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.189 [2024-10-15
13:04:21.060264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.189 [2024-10-15 13:04:21.060278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.189 [2024-10-15 13:04:21.060286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.189 [2024-10-15 13:04:21.060300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.189 [2024-10-15 13:04:21.060308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.189 [2024-10-15 13:04:21.060322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.189 [2024-10-15 13:04:21.060330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.189 [2024-10-15 13:04:21.060343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.189 [2024-10-15 13:04:21.060351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.189 [2024-10-15 13:04:21.060365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.189 [2024-10-15 13:04:21.060373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.189 [2024-10-15 
13:04:21.060387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.189 [2024-10-15 13:04:21.060395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.189 [2024-10-15 13:04:21.060409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.189 [2024-10-15 13:04:21.060416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.189 [2024-10-15 13:04:21.060430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.189 [2024-10-15 13:04:21.060438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.189 [2024-10-15 13:04:21.060452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.189 [2024-10-15 13:04:21.060460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.189 [2024-10-15 13:04:21.060474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.189 [2024-10-15 13:04:21.060482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 
13:04:21.060504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 
13:04:21.060637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 
13:04:21.060754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.060821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.190 [2024-10-15 13:04:21.060843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.060865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 
13:04:21.060879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.060888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.060911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.060932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.060955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.060977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.060991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 
13:04:21.060999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 
13:04:21.061125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061243] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.190 [2024-10-15 13:04:21.061392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.190 [2024-10-15 13:04:21.061400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.191 [2024-10-15 13:04:21.061864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.191 [2024-10-15 13:04:21.061872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... identical command/completion pairs repeated between 13:04:21.062 and 13:04:21.071 for WRITE lba:128544 through lba:129432 and READ lba:128416 through lba:128536 (len:8 each) on qid:1; every completion fails with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:24:17.194 [2024-10-15 13:04:21.071503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.071979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.071995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.072004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.072021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.072030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.072045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.072054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.072069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.194 [2024-10-15 13:04:21.072078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.194 [2024-10-15 13:04:21.072093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.072781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.072790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.073980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.195 [2024-10-15 13:04:21.073989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.195 [2024-10-15 13:04:21.074005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-10-15 13:04:21.074013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.196 [2024-10-15 13:04:21.074029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-10-15 13:04:21.074038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.196 [2024-10-15 13:04:21.074053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-10-15 13:04:21.074062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.196 [2024-10-15 13:04:21.074078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-10-15 13:04:21.074087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.196 [2024-10-15 13:04:21.074106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-10-15 13:04:21.074115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.196 [2024-10-15 13:04:21.074131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-10-15 13:04:21.074139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.196 [2024-10-15 13:04:21.074155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.196 [2024-10-15 13:04:21.074164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.196 [2024-10-15 13:04:21.074188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.196 [2024-10-15 13:04:21.074213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.196 [2024-10-15 13:04:21.074237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.196 [2024-10-15 13:04:21.074261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.196 [2024-10-15 13:04:21.074286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.196 [2024-10-15 13:04:21.074311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.196 [2024-10-15 13:04:21.074718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.196 [2024-10-15 13:04:21.074743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.196 [2024-10-15 13:04:21.074767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.196 [2024-10-15 13:04:21.074791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:17.196 [2024-10-15 13:04:21.074807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.196 [2024-10-15 13:04:21.074816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.074832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.074840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.075988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.075999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:24:17.197 [2024-10-15 13:04:21.076565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.197 [2024-10-15 13:04:21.076577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.076969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.076981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.077000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.077011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.077030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.077044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.077064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.077075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.077094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.077105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.077124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.077135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.077154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.077165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.077184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.077195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.077214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.077226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.077245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.077256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.077275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.077285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.198 [2024-10-15 13:04:21.078708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:17.198 [2024-10-15 13:04:21.078727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.199 [2024-10-15 13:04:21.078738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:17.199 [2024-10-15 13:04:21.078757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.199 [2024-10-15 13:04:21.078768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:17.199 [2024-10-15 13:04:21.078787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.199 [2024-10-15 13:04:21.078798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:17.199 [2024-10-15 13:04:21.078817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.199 [2024-10-15 13:04:21.078828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:17.199 [2024-10-15 13:04:21.078847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.199 [2024-10-15 13:04:21.078857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:17.199 [2024-10-15 13:04:21.078876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.199 [2024-10-15 13:04:21.078887] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.078907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.078917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.078937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.078948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.078967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.078978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.078997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.079464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.079494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.079524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.079555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.079585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.079776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.079806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.079836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.079866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.079896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.079928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.079958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.079977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.079988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.080008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.080018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.080038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.080049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.080068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.080079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.080098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.199 [2024-10-15 13:04:21.080109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.199 [2024-10-15 13:04:21.080128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.199 [2024-10-15 13:04:21.080139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.080158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.080169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.080189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.080200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.080220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.080230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.081987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.081998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.082017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.082028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.082047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.082058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.200 [2024-10-15 13:04:21.082077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.200 [2024-10-15 13:04:21.082088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.082980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.082999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.083012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.083032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.083043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.083940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.083962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.083985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.083996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.084016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.084027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.084046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.084057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.084077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.084087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.084106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.084117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.084137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.084147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.084167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.084177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.084197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.084208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.084227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.201 [2024-10-15 13:04:21.084239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.201 [2024-10-15 13:04:21.084259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.084979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.084999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.085010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.085044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.085074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.085105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.085135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.085164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.085194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.085224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.085254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.202 [2024-10-15 13:04:21.085284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.202 [2024-10-15 13:04:21.085314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.202 [2024-10-15 13:04:21.085346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.202 [2024-10-15 13:04:21.085388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.202 [2024-10-15 13:04:21.085411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.202 [2024-10-15 13:04:21.085432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.202 [2024-10-15 13:04:21.085451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.202 [2024-10-15 13:04:21.085471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.202 [2024-10-15 13:04:21.085491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.202 [2024-10-15 13:04:21.085504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.203 [2024-10-15 13:04:21.085511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.203 [2024-10-15 13:04:21.085523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.203 [2024-10-15 13:04:21.085531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.203 [2024-10-15 13:04:21.085543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.203 [2024-10-15 13:04:21.085551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.203 [2024-10-15 13:04:21.085563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.203 [2024-10-15 13:04:21.085570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.203 [2024-10-15 13:04:21.085583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.203 [2024-10-15 13:04:21.085590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.203 [2024-10-15 13:04:21.085606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.203 [2024-10-15 13:04:21.085614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.203 [2024-10-15 13:04:21.085627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.203 [2024-10-15 13:04:21.085634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.203 [2024-10-15 13:04:21.085647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.203 [2024-10-15 13:04:21.085655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.203 [2024-10-15 13:04:21.085668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.203 [2024-10-15 13:04:21.085676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.203 [2024-10-15 13:04:21.085689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.203 [2024-10-15 13:04:21.085695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.203 [2024-10-15 13:04:21.085709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.203 [2024-10-15 13:04:21.085716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
[... ~105 similar log entries trimmed: WRITE commands on sqid:1 nsid:1 (lba 128560 through 129424, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) followed by READ commands (lba 128416 through 128440, len:8, SGL TRANSPORT DATA BLOCK), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd 0076 through 0066, timestamps 13:04:21.086 to 13:04:21.089 ...]
00:24:17.206 [2024-10-15 13:04:21.089218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.206 [2024-10-15 13:04:21.089225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.206 [2024-10-15 13:04:21.089245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.206 [2024-10-15 13:04:21.089265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.206 [2024-10-15 13:04:21.089285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.206 [2024-10-15 13:04:21.089306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.206 [2024-10-15 13:04:21.089325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.206 [2024-10-15 13:04:21.089347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.206 [2024-10-15 13:04:21.089367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.206 [2024-10-15 13:04:21.089386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.206 [2024-10-15 13:04:21.089406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.206 [2024-10-15 13:04:21.089426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.206 [2024-10-15 13:04:21.089446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.206 [2024-10-15 13:04:21.089465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.089478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.206 [2024-10-15 13:04:21.089485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.090030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.206 [2024-10-15 13:04:21.090043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.090058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.206 [2024-10-15 13:04:21.090066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.090079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.206 [2024-10-15 13:04:21.090087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.090100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.206 [2024-10-15 13:04:21.090108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.090123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.206 [2024-10-15 13:04:21.090131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.090145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.206 [2024-10-15 13:04:21.090152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.090166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.206 [2024-10-15 13:04:21.090173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.206 [2024-10-15 13:04:21.090186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.206 [2024-10-15 13:04:21.090193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.090981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.090989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.091001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.091009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.207 [2024-10-15 13:04:21.091021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.207 [2024-10-15 13:04:21.091028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.208 [2024-10-15 13:04:21.091041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.208 [2024-10-15 13:04:21.091048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.208 [2024-10-15 13:04:21.091060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.208 [2024-10-15 13:04:21.091068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.208 [2024-10-15 13:04:21.091080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.208 [2024-10-15 13:04:21.091088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.208 [2024-10-15 13:04:21.091100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.208 [2024-10-15 13:04:21.091107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.208 [2024-10-15 13:04:21.091122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.208 [2024-10-15 13:04:21.091129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.208 [2024-10-15 13:04:21.091142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.208 [2024-10-15 13:04:21.091150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.208 [2024-10-15 13:04:21.091163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.208 [2024-10-15 13:04:21.091170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.208 [2024-10-15 13:04:21.091183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.208 [2024-10-15 13:04:21.091190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.208 [2024-10-15 13:04:21.091202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.208 [2024-10-15 13:04:21.091210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.208 [2024-10-15 13:04:21.091222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.208 [2024-10-15 13:04:21.091230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.208 [2024-10-15 13:04:21.091242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.208 [2024-10-15 13:04:21.091250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
[... identical *NOTICE* command/completion pairs repeated for every outstanding I/O on qid:1 — WRITEs lba:129032 through lba:129432 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), READs lba:128416 through lba:128536 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), and WRITEs lba:128544 through lba:128912; each completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0, sqhd advancing per completion ...]
00:24:17.211 [2024-10-15 13:04:21.094724] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.094987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.094995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.095008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.095015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.095028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.095036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.095049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.095056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.211 [2024-10-15 13:04:21.095632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.211 [2024-10-15 13:04:21.095644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.095985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.095992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.212 [2024-10-15 13:04:21.096441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.212 [2024-10-15 13:04:21.096448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.213 [2024-10-15 13:04:21.096466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.213 [2024-10-15 13:04:21.096485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.213 [2024-10-15 13:04:21.096503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.213 [2024-10-15 13:04:21.096524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.213 [2024-10-15 13:04:21.096543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.213 [2024-10-15 13:04:21.096562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.213 [2024-10-15 13:04:21.096581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.213 [2024-10-15 13:04:21.096605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.213 [2024-10-15 13:04:21.096624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.213 [2024-10-15 13:04:21.096643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.213 [2024-10-15 13:04:21.096661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.213 [2024-10-15 13:04:21.096680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.213 [2024-10-15 13:04:21.096692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.213 [2024-10-15 13:04:21.096699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.096711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.213 [2024-10-15 13:04:21.096717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.096729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.213 [2024-10-15 13:04:21.096736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.096750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.213 [2024-10-15 13:04:21.096756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.096768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.213 [2024-10-15 13:04:21.096775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.096787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.213 [2024-10-15 13:04:21.096794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.096806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.213 [2024-10-15 13:04:21.096813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.096825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.213 [2024-10-15 13:04:21.096832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.213 [2024-10-15 13:04:21.097382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.213 [2024-10-15 13:04:21.097751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:17.213 [2024-10-15 13:04:21.097763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.097782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.097801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.097819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.097838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.097856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.097875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.097894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.097912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.097931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.097949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.097969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.097989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.097996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.214 [2024-10-15 13:04:21.098429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:17.214 [2024-10-15 13:04:21.098441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.098448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.098460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.098467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.098479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.098485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.098497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.098504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.098516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.098522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.098534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.098541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.098553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.098560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.098572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.098579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.215 [2024-10-15 13:04:21.099781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:17.215 [2024-10-15 13:04:21.099793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.216 [2024-10-15 13:04:21.099799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:17.216 [2024-10-15 13:04:21.099812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.216 [2024-10-15 13:04:21.099819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:17.216 [2024-10-15 13:04:21.099831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.216 [2024-10-15 13:04:21.099838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:17.216 [2024-10-15 13:04:21.099850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.216 [2024-10-15 13:04:21.099857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:17.216 [2024-10-15 13:04:21.099869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.216 [2024-10-15 13:04:21.099875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:17.216 [2024-10-15 13:04:21.099887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.216 [2024-10-15 13:04:21.099894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:17.216 [2024-10-15 13:04:21.099908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.216 [2024-10-15 13:04:21.099915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:17.216 [2024-10-15 13:04:21.099927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.216 [2024-10-15 13:04:21.099933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:17.216 [2024-10-15 13:04:21.099945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.216 [2024-10-15 13:04:21.099952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:17.216 [2024-10-15 13:04:21.099964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.099970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.099982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.099989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.100008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.100027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.100045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.100868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.216 [2024-10-15 13:04:21.100890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.100909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.100928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.100947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.100966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.100985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.100997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.101003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.101015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.101022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.101034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.101041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.101053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.101060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.101071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.101078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.216 [2024-10-15 13:04:21.101090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.216 [2024-10-15 13:04:21.101097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.101755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.101762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.102189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.102200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.102214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.102221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.102233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.102242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.217 [2024-10-15 13:04:21.102255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.217 [2024-10-15 13:04:21.102261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:17.217 [2024-10-15 13:04:21.102273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.217 [2024-10-15 13:04:21.102280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:17.217 [2024-10-15 13:04:21.102292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.217 [2024-10-15 13:04:21.102299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:17.217 [2024-10-15 13:04:21.102310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.217 [2024-10-15 13:04:21.102317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.102983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.102994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.218 [2024-10-15 13:04:21.103001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:17.218 [2024-10-15 13:04:21.103013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.103829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.103848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.103867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.103886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.103904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.103923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.103942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.103961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.103980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.103992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.103998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.104010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.104017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.104029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.104039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.104051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.104058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.104070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.104077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.104089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.104096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.104108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.104114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.104126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.104133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.104145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.219 [2024-10-15 13:04:21.104152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.104164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.104170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.104182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.219 [2024-10-15 13:04:21.104189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:17.219 [2024-10-15 13:04:21.104201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.104208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.104220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.104227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.104239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.104245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.104258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.104265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.104956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.104966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.104992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.220 [2024-10-15 13:04:21.105614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:17.220 [2024-10-15 13:04:21.105628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.220 [2024-10-15 13:04:21.105635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.220 [2024-10-15 13:04:21.105648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.220 [2024-10-15 13:04:21.105655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.220 [2024-10-15 13:04:21.105670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.220 [2024-10-15 13:04:21.105677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.220 [2024-10-15 13:04:21.105752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.220 [2024-10-15 13:04:21.105760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.220 [2024-10-15 13:04:21.105777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.220 [2024-10-15 13:04:21.105784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.220 [2024-10-15 13:04:21.105799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.220 [2024-10-15 13:04:21.105808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.220 [2024-10-15 13:04:21.105824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.220 [2024-10-15 13:04:21.105831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.220 [2024-10-15 13:04:21.105847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.105854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.105869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.105876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.105891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.105898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.105914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.105921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.105936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.105943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.105958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.105965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.105981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.105988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.221 [2024-10-15 13:04:21.106783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.221 [2024-10-15 13:04:21.106798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:21.106805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:21.106821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:21.106827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:21.106843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:21.106850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:21.106865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:21.106872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:21.106887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:21.106894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:21.106910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:21.106918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:21.106934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:21.106941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:21.106957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:21.106964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.222 11317.23 IOPS, 44.21 MiB/s [2024-10-15T11:04:37.541Z] 10508.86 IOPS, 41.05 MiB/s [2024-10-15T11:04:37.541Z] 9808.27 IOPS, 38.31 MiB/s [2024-10-15T11:04:37.541Z] 9367.31 IOPS, 36.59 MiB/s [2024-10-15T11:04:37.541Z] 9486.00 IOPS, 37.05 MiB/s [2024-10-15T11:04:37.541Z] 9605.78 IOPS, 37.52 MiB/s [2024-10-15T11:04:37.541Z] 9793.63 IOPS, 38.26 MiB/s [2024-10-15T11:04:37.541Z] 9981.70 IOPS, 38.99 MiB/s [2024-10-15T11:04:37.541Z] 10150.43 IOPS, 39.65 MiB/s [2024-10-15T11:04:37.541Z] 10208.68 IOPS, 39.88 MiB/s [2024-10-15T11:04:37.541Z] 10261.00 IOPS, 40.08 MiB/s [2024-10-15T11:04:37.541Z] 10345.96 IOPS, 40.41 MiB/s [2024-10-15T11:04:37.541Z] 10483.12 IOPS, 40.95 MiB/s [2024-10-15T11:04:37.541Z] 10598.65 IOPS, 41.40 MiB/s [2024-10-15T11:04:37.541Z] [2024-10-15 13:04:34.740647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.740687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.740719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.740728] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.740741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.740748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.740760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.740767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.740779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.740786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.740799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.740806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.740818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.740825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.740837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.222 [2024-10-15 13:04:34.740844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.741100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.741109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.741121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.741128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.741140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.741147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.741159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.741166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.741178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.741185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.741197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.741204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.741216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.741222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.741235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.741242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.741254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.741261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.742308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.222 [2024-10-15 13:04:34.742327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.742344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.222 [2024-10-15 13:04:34.742353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.742366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.222 [2024-10-15 13:04:34.742373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.742385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.222 [2024-10-15 13:04:34.742395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.742407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.222 [2024-10-15 13:04:34.742414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.742426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.742433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.742445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.742452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.742464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.742471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.742483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.742489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.222 [2024-10-15 13:04:34.742501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.222 [2024-10-15 13:04:34.742509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.742782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.223 [2024-10-15 13:04:34.742801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.223 [2024-10-15 13:04:34.742820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.223 [2024-10-15 13:04:34.742838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.223 [2024-10-15 13:04:34.742857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.742872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.223 [2024-10-15 13:04:34.742878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.743616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.743633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.743650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.743658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.743670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.743677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.743689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.743696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.743709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.743716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.743728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.743734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.743746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.743753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.743765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.743772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.743784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.743791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.223 [2024-10-15 13:04:34.743803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.223 [2024-10-15 13:04:34.743810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.223 10671.89 IOPS, 41.69 MiB/s [2024-10-15T11:04:37.542Z] 10709.46 IOPS, 41.83 MiB/s [2024-10-15T11:04:37.542Z] Received shutdown signal, test time was about 28.725580 seconds 00:24:17.223 00:24:17.224 Latency(us) 00:24:17.224 
[2024-10-15T11:04:37.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.224 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:17.224 Verification LBA range: start 0x0 length 0x4000 00:24:17.224 Nvme0n1 : 28.72 10728.69 41.91 0.00 0.00 11909.38 464.21 3083812.08 00:24:17.224 [2024-10-15T11:04:37.543Z] =================================================================================================================== 00:24:17.224 [2024-10-15T11:04:37.543Z] Total : 10728.69 41.91 0.00 0.00 11909.38 464.21 3083812.08 00:24:17.224 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.224 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:17.224 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:17.224 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:17.224 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:17.224 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:17.224 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.224 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:17.224 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.224 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.224 rmmod nvme_tcp 00:24:17.483 rmmod nvme_fabrics 00:24:17.483 rmmod nvme_keyring 00:24:17.483 
13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 1322683 ']' 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 1322683 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1322683 ']' 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1322683 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1322683 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1322683' 00:24:17.483 killing process with pid 1322683 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1322683 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1322683 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:17.483 13:04:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.483 13:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.036 13:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:20.036 00:24:20.036 real 0m40.468s 00:24:20.036 user 1m49.452s 00:24:20.036 sys 0m11.727s 00:24:20.036 13:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.036 13:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:20.036 ************************************ 00:24:20.036 END TEST nvmf_host_multipath_status 00:24:20.036 ************************************ 00:24:20.036 13:04:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:20.036 13:04:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:20.036 13:04:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.036 13:04:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.036 ************************************ 00:24:20.036 START TEST nvmf_discovery_remove_ifc 00:24:20.036 ************************************ 00:24:20.036 13:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:20.036 * Looking for test storage... 00:24:20.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.036 13:04:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:20.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.036 --rc genhtml_branch_coverage=1 00:24:20.036 --rc genhtml_function_coverage=1 00:24:20.036 --rc genhtml_legend=1 00:24:20.036 --rc geninfo_all_blocks=1 
00:24:20.036 --rc geninfo_unexecuted_blocks=1 00:24:20.036 00:24:20.036 ' 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:20.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.036 --rc genhtml_branch_coverage=1 00:24:20.036 --rc genhtml_function_coverage=1 00:24:20.036 --rc genhtml_legend=1 00:24:20.036 --rc geninfo_all_blocks=1 00:24:20.036 --rc geninfo_unexecuted_blocks=1 00:24:20.036 00:24:20.036 ' 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:20.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.036 --rc genhtml_branch_coverage=1 00:24:20.036 --rc genhtml_function_coverage=1 00:24:20.036 --rc genhtml_legend=1 00:24:20.036 --rc geninfo_all_blocks=1 00:24:20.036 --rc geninfo_unexecuted_blocks=1 00:24:20.036 00:24:20.036 ' 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:20.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.036 --rc genhtml_branch_coverage=1 00:24:20.036 --rc genhtml_function_coverage=1 00:24:20.036 --rc genhtml_legend=1 00:24:20.036 --rc geninfo_all_blocks=1 00:24:20.036 --rc geninfo_unexecuted_blocks=1 00:24:20.036 00:24:20.036 ' 00:24:20.036 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.037 
13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.037 
13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:20.037 13:04:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.037 13:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.604 13:04:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.604 13:04:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:26.604 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.604 13:04:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:26.604 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.604 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:26.605 Found net devices under 0000:86:00.0: cvl_0_0 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:26.605 Found net devices under 0000:86:00.1: cvl_0_1 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 
-- # [[ tcp == tcp ]] 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.605 13:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:24:26.605 00:24:26.605 --- 10.0.0.2 ping statistics --- 00:24:26.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.605 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:24:26.605 00:24:26.605 --- 10.0.0.1 ping statistics --- 00:24:26.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.605 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=1331586 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # waitforlisten 1331586 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1331586 ']' 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.605 [2024-10-15 13:04:46.118771] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:24:26.605 [2024-10-15 13:04:46.118816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.605 [2024-10-15 13:04:46.188048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.605 [2024-10-15 13:04:46.226254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.605 [2024-10-15 13:04:46.226288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:26.605 [2024-10-15 13:04:46.226295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.605 [2024-10-15 13:04:46.226302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.605 [2024-10-15 13:04:46.226307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.605 [2024-10-15 13:04:46.226920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.605 [2024-10-15 13:04:46.380743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.605 [2024-10-15 13:04:46.388949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:26.605 null0 00:24:26.605 [2024-10-15 13:04:46.420907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1331626 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1331626 /tmp/host.sock 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1331626 ']' 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:26.605 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:26.606 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.606 [2024-10-15 13:04:46.491476] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:24:26.606 [2024-10-15 13:04:46.491517] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331626 ] 00:24:26.606 [2024-10-15 13:04:46.560185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.606 [2024-10-15 13:04:46.602935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.606 13:04:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.606 13:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.549 [2024-10-15 13:04:47.777751] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:27.549 [2024-10-15 13:04:47.777772] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:27.549 [2024-10-15 13:04:47.777787] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:27.549 [2024-10-15 13:04:47.864042] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:27.836 [2024-10-15 13:04:47.961680] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:27.836 [2024-10-15 13:04:47.961724] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:27.836 [2024-10-15 13:04:47.961744] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:27.836 [2024-10-15 13:04:47.961757] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:27.836 [2024-10-15 13:04:47.961774] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:27.836 13:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.836 13:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:27.836 13:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:27.836 [2024-10-15 13:04:47.966089] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ea3a50 was disconnected and freed. delete nvme_qpair. 00:24:27.836 13:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.836 13:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:27.837 13:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.837 13:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:27.837 13:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.837 13:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:27.837 13:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.837 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:27.837 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:27.837 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:27.837 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:27.837 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:27.837 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.837 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.837 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.837 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:27.837 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:27.837 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:27.837 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.101 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:28.101 13:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:29.035 13:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:29.035 13:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.035 13:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:29.035 13:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.035 13:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:29.035 13:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.035 13:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.035 13:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.035 
13:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:29.035 13:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:29.971 13:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:29.971 13:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.971 13:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:29.971 13:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.971 13:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:29.971 13:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.971 13:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.971 13:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.971 13:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:29.971 13:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:31.348 13:04:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:31.348 13:04:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.348 13:04:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:31.348 13:04:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.348 13:04:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:31.348 13:04:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.348 13:04:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:31.348 13:04:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.348 13:04:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:31.348 13:04:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:32.285 13:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.285 13:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.285 13:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.285 13:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.285 13:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:32.285 13:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.285 13:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.285 13:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.285 13:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:32.285 13:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:33.361 13:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:24:33.361 13:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.361 13:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.361 13:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.361 13:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.361 13:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.361 13:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.361 13:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.361 [2024-10-15 13:04:53.403431] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:33.361 [2024-10-15 13:04:53.403469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.361 [2024-10-15 13:04:53.403495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.361 [2024-10-15 13:04:53.403505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.361 [2024-10-15 13:04:53.403511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.361 [2024-10-15 13:04:53.403519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.361 [2024-10-15 13:04:53.403526] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.361 [2024-10-15 13:04:53.403533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.361 [2024-10-15 13:04:53.403540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.361 [2024-10-15 13:04:53.403547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.361 [2024-10-15 13:04:53.403553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.361 [2024-10-15 13:04:53.403560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e802e0 is same with the state(6) to be set 00:24:33.361 13:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:33.361 13:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:33.361 [2024-10-15 13:04:53.413453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e802e0 (9): Bad file descriptor 00:24:33.361 [2024-10-15 13:04:53.423491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:34.297 13:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.297 13:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.297 13:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.297 13:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:24:34.297 13:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.297 13:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.297 13:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.297 [2024-10-15 13:04:54.429639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:34.297 [2024-10-15 13:04:54.429727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e802e0 with addr=10.0.0.2, port=4420 00:24:34.297 [2024-10-15 13:04:54.429761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e802e0 is same with the state(6) to be set 00:24:34.297 [2024-10-15 13:04:54.429820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e802e0 (9): Bad file descriptor 00:24:34.297 [2024-10-15 13:04:54.430778] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:34.297 [2024-10-15 13:04:54.430843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:34.297 [2024-10-15 13:04:54.430874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:34.297 [2024-10-15 13:04:54.430898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:34.297 [2024-10-15 13:04:54.430961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.297 [2024-10-15 13:04:54.430987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:34.297 13:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.297 13:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:34.297 13:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:35.233 [2024-10-15 13:04:55.433480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:35.233 [2024-10-15 13:04:55.433503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:35.233 [2024-10-15 13:04:55.433510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:35.233 [2024-10-15 13:04:55.433517] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:35.233 [2024-10-15 13:04:55.433529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.233 [2024-10-15 13:04:55.433547] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:35.233 [2024-10-15 13:04:55.433567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.233 [2024-10-15 13:04:55.433576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.233 [2024-10-15 13:04:55.433585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.233 [2024-10-15 13:04:55.433592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.233 [2024-10-15 13:04:55.433599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.233 [2024-10-15 13:04:55.433610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.233 [2024-10-15 13:04:55.433616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.233 [2024-10-15 13:04:55.433623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.233 [2024-10-15 13:04:55.433630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.233 [2024-10-15 13:04:55.433636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.233 [2024-10-15 13:04:55.433642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:24:35.233 [2024-10-15 13:04:55.434014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6f9c0 (9): Bad file descriptor 00:24:35.233 [2024-10-15 13:04:55.435025] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:35.233 [2024-10-15 13:04:55.435035] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:35.233 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.233 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.233 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.233 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.233 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.233 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.233 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.233 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.233 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:35.233 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.233 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.492 13:04:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:35.492 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.492 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.492 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.492 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.492 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.492 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.492 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.492 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.492 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:35.492 13:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.428 13:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.428 13:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.428 13:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.428 13:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.428 13:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.428 13:04:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.428 13:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.428 13:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.428 13:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:36.428 13:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:37.363 [2024-10-15 13:04:57.492767] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:37.363 [2024-10-15 13:04:57.492784] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:37.363 [2024-10-15 13:04:57.492796] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:37.363 [2024-10-15 13:04:57.579059] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:37.621 13:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.621 13:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.621 13:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.621 13:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.621 13:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.621 13:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.621 13:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.621 13:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.621 13:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:37.621 13:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:37.621 [2024-10-15 13:04:57.755673] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:37.621 [2024-10-15 13:04:57.755708] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:37.621 [2024-10-15 13:04:57.755726] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:37.621 [2024-10-15 13:04:57.755739] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:37.621 [2024-10-15 13:04:57.755745] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:37.621 [2024-10-15 13:04:57.761091] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e7b9f0 was disconnected and freed. delete nvme_qpair. 
00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1331626 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1331626 ']' 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1331626 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1331626 
00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1331626' 00:24:38.557 killing process with pid 1331626 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1331626 00:24:38.557 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1331626 00:24:38.817 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:38.817 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:38.817 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:38.817 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.817 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:38.817 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.817 13:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.817 rmmod nvme_tcp 00:24:38.817 rmmod nvme_fabrics 00:24:38.817 rmmod nvme_keyring 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 1331586 ']' 00:24:38.817 
13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 1331586 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1331586 ']' 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1331586 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1331586 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1331586' 00:24:38.817 killing process with pid 1331586 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1331586 00:24:38.817 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1331586 00:24:39.075 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:39.075 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:39.075 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:39.075 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:39.075 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:24:39.075 13:04:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:39.075 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:24:39.075 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:39.075 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:39.075 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.075 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.075 13:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:41.611 00:24:41.611 real 0m21.422s 00:24:41.611 user 0m26.592s 00:24:41.611 sys 0m5.928s 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.611 ************************************ 00:24:41.611 END TEST nvmf_discovery_remove_ifc 00:24:41.611 ************************************ 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.611 ************************************ 
00:24:41.611 START TEST nvmf_identify_kernel_target 00:24:41.611 ************************************ 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:41.611 * Looking for test storage... 00:24:41.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.611 13:05:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:41.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.611 --rc genhtml_branch_coverage=1 00:24:41.611 --rc genhtml_function_coverage=1 00:24:41.611 --rc genhtml_legend=1 00:24:41.611 --rc geninfo_all_blocks=1 00:24:41.611 --rc geninfo_unexecuted_blocks=1 00:24:41.611 00:24:41.611 ' 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:41.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.611 --rc genhtml_branch_coverage=1 00:24:41.611 --rc genhtml_function_coverage=1 00:24:41.611 --rc genhtml_legend=1 00:24:41.611 --rc geninfo_all_blocks=1 00:24:41.611 --rc geninfo_unexecuted_blocks=1 00:24:41.611 00:24:41.611 ' 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:41.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.611 --rc genhtml_branch_coverage=1 00:24:41.611 --rc genhtml_function_coverage=1 00:24:41.611 --rc genhtml_legend=1 00:24:41.611 --rc geninfo_all_blocks=1 00:24:41.611 --rc geninfo_unexecuted_blocks=1 00:24:41.611 00:24:41.611 ' 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:41.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.611 --rc genhtml_branch_coverage=1 00:24:41.611 --rc genhtml_function_coverage=1 00:24:41.611 --rc genhtml_legend=1 00:24:41.611 --rc geninfo_all_blocks=1 
00:24:41.611 --rc geninfo_unexecuted_blocks=1 00:24:41.611 00:24:41.611 ' 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.611 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:41.612 13:05:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.183 13:05:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:48.183 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.183 13:05:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:48.183 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:48.183 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.184 13:05:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:48.184 Found net devices under 0000:86:00.0: cvl_0_0 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:48.184 Found net devices under 0000:86:00.1: cvl_0_1 
00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:48.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:48.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:24:48.184 00:24:48.184 --- 10.0.0.2 ping statistics --- 00:24:48.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.184 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:48.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:24:48.184 00:24:48.184 --- 10.0.0.1 ping statistics --- 00:24:48.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.184 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:48.184 
13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:48.184 13:05:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:50.090 Waiting for block devices as requested 00:24:50.090 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:50.349 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:50.349 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:50.349 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:50.609 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:50.609 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:50.609 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:50.868 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:50.868 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:50.868 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:51.127 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:51.127 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:51.127 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:51.127 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:51.386 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:24:51.386 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:51.386 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:51.644 No valid GPT data, bailing 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:51.644 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:51.644 00:24:51.644 Discovery Log Number of Records 2, Generation counter 2 00:24:51.644 =====Discovery Log Entry 0====== 00:24:51.645 trtype: tcp 00:24:51.645 adrfam: ipv4 00:24:51.645 subtype: current discovery subsystem 
00:24:51.645 treq: not specified, sq flow control disable supported 00:24:51.645 portid: 1 00:24:51.645 trsvcid: 4420 00:24:51.645 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:51.645 traddr: 10.0.0.1 00:24:51.645 eflags: none 00:24:51.645 sectype: none 00:24:51.645 =====Discovery Log Entry 1====== 00:24:51.645 trtype: tcp 00:24:51.645 adrfam: ipv4 00:24:51.645 subtype: nvme subsystem 00:24:51.645 treq: not specified, sq flow control disable supported 00:24:51.645 portid: 1 00:24:51.645 trsvcid: 4420 00:24:51.645 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:51.645 traddr: 10.0.0.1 00:24:51.645 eflags: none 00:24:51.645 sectype: none 00:24:51.645 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:51.645 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:51.904 ===================================================== 00:24:51.905 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:51.905 ===================================================== 00:24:51.905 Controller Capabilities/Features 00:24:51.905 ================================ 00:24:51.905 Vendor ID: 0000 00:24:51.905 Subsystem Vendor ID: 0000 00:24:51.905 Serial Number: 6d85ef94fb80820253ce 00:24:51.905 Model Number: Linux 00:24:51.905 Firmware Version: 6.8.9-20 00:24:51.905 Recommended Arb Burst: 0 00:24:51.905 IEEE OUI Identifier: 00 00 00 00:24:51.905 Multi-path I/O 00:24:51.905 May have multiple subsystem ports: No 00:24:51.905 May have multiple controllers: No 00:24:51.905 Associated with SR-IOV VF: No 00:24:51.905 Max Data Transfer Size: Unlimited 00:24:51.905 Max Number of Namespaces: 0 00:24:51.905 Max Number of I/O Queues: 1024 00:24:51.905 NVMe Specification Version (VS): 1.3 00:24:51.905 NVMe Specification Version (Identify): 1.3 00:24:51.905 Maximum Queue Entries: 1024 
00:24:51.905 Contiguous Queues Required: No 00:24:51.905 Arbitration Mechanisms Supported 00:24:51.905 Weighted Round Robin: Not Supported 00:24:51.905 Vendor Specific: Not Supported 00:24:51.905 Reset Timeout: 7500 ms 00:24:51.905 Doorbell Stride: 4 bytes 00:24:51.905 NVM Subsystem Reset: Not Supported 00:24:51.905 Command Sets Supported 00:24:51.905 NVM Command Set: Supported 00:24:51.905 Boot Partition: Not Supported 00:24:51.905 Memory Page Size Minimum: 4096 bytes 00:24:51.905 Memory Page Size Maximum: 4096 bytes 00:24:51.905 Persistent Memory Region: Not Supported 00:24:51.905 Optional Asynchronous Events Supported 00:24:51.905 Namespace Attribute Notices: Not Supported 00:24:51.905 Firmware Activation Notices: Not Supported 00:24:51.905 ANA Change Notices: Not Supported 00:24:51.905 PLE Aggregate Log Change Notices: Not Supported 00:24:51.905 LBA Status Info Alert Notices: Not Supported 00:24:51.905 EGE Aggregate Log Change Notices: Not Supported 00:24:51.905 Normal NVM Subsystem Shutdown event: Not Supported 00:24:51.905 Zone Descriptor Change Notices: Not Supported 00:24:51.905 Discovery Log Change Notices: Supported 00:24:51.905 Controller Attributes 00:24:51.905 128-bit Host Identifier: Not Supported 00:24:51.905 Non-Operational Permissive Mode: Not Supported 00:24:51.905 NVM Sets: Not Supported 00:24:51.905 Read Recovery Levels: Not Supported 00:24:51.905 Endurance Groups: Not Supported 00:24:51.905 Predictable Latency Mode: Not Supported 00:24:51.905 Traffic Based Keep ALive: Not Supported 00:24:51.905 Namespace Granularity: Not Supported 00:24:51.905 SQ Associations: Not Supported 00:24:51.905 UUID List: Not Supported 00:24:51.905 Multi-Domain Subsystem: Not Supported 00:24:51.905 Fixed Capacity Management: Not Supported 00:24:51.905 Variable Capacity Management: Not Supported 00:24:51.905 Delete Endurance Group: Not Supported 00:24:51.905 Delete NVM Set: Not Supported 00:24:51.905 Extended LBA Formats Supported: Not Supported 00:24:51.905 Flexible 
Data Placement Supported: Not Supported 00:24:51.905 00:24:51.905 Controller Memory Buffer Support 00:24:51.905 ================================ 00:24:51.905 Supported: No 00:24:51.905 00:24:51.905 Persistent Memory Region Support 00:24:51.905 ================================ 00:24:51.905 Supported: No 00:24:51.905 00:24:51.905 Admin Command Set Attributes 00:24:51.905 ============================ 00:24:51.905 Security Send/Receive: Not Supported 00:24:51.905 Format NVM: Not Supported 00:24:51.905 Firmware Activate/Download: Not Supported 00:24:51.905 Namespace Management: Not Supported 00:24:51.905 Device Self-Test: Not Supported 00:24:51.905 Directives: Not Supported 00:24:51.905 NVMe-MI: Not Supported 00:24:51.905 Virtualization Management: Not Supported 00:24:51.905 Doorbell Buffer Config: Not Supported 00:24:51.905 Get LBA Status Capability: Not Supported 00:24:51.905 Command & Feature Lockdown Capability: Not Supported 00:24:51.905 Abort Command Limit: 1 00:24:51.905 Async Event Request Limit: 1 00:24:51.905 Number of Firmware Slots: N/A 00:24:51.905 Firmware Slot 1 Read-Only: N/A 00:24:51.905 Firmware Activation Without Reset: N/A 00:24:51.905 Multiple Update Detection Support: N/A 00:24:51.905 Firmware Update Granularity: No Information Provided 00:24:51.905 Per-Namespace SMART Log: No 00:24:51.905 Asymmetric Namespace Access Log Page: Not Supported 00:24:51.905 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:51.905 Command Effects Log Page: Not Supported 00:24:51.905 Get Log Page Extended Data: Supported 00:24:51.905 Telemetry Log Pages: Not Supported 00:24:51.905 Persistent Event Log Pages: Not Supported 00:24:51.905 Supported Log Pages Log Page: May Support 00:24:51.905 Commands Supported & Effects Log Page: Not Supported 00:24:51.905 Feature Identifiers & Effects Log Page:May Support 00:24:51.905 NVMe-MI Commands & Effects Log Page: May Support 00:24:51.905 Data Area 4 for Telemetry Log: Not Supported 00:24:51.905 Error Log Page Entries 
Supported: 1 00:24:51.905 Keep Alive: Not Supported 00:24:51.905 00:24:51.905 NVM Command Set Attributes 00:24:51.905 ========================== 00:24:51.905 Submission Queue Entry Size 00:24:51.905 Max: 1 00:24:51.905 Min: 1 00:24:51.905 Completion Queue Entry Size 00:24:51.905 Max: 1 00:24:51.905 Min: 1 00:24:51.905 Number of Namespaces: 0 00:24:51.905 Compare Command: Not Supported 00:24:51.905 Write Uncorrectable Command: Not Supported 00:24:51.905 Dataset Management Command: Not Supported 00:24:51.905 Write Zeroes Command: Not Supported 00:24:51.905 Set Features Save Field: Not Supported 00:24:51.905 Reservations: Not Supported 00:24:51.905 Timestamp: Not Supported 00:24:51.905 Copy: Not Supported 00:24:51.905 Volatile Write Cache: Not Present 00:24:51.905 Atomic Write Unit (Normal): 1 00:24:51.905 Atomic Write Unit (PFail): 1 00:24:51.905 Atomic Compare & Write Unit: 1 00:24:51.905 Fused Compare & Write: Not Supported 00:24:51.905 Scatter-Gather List 00:24:51.905 SGL Command Set: Supported 00:24:51.905 SGL Keyed: Not Supported 00:24:51.905 SGL Bit Bucket Descriptor: Not Supported 00:24:51.905 SGL Metadata Pointer: Not Supported 00:24:51.905 Oversized SGL: Not Supported 00:24:51.905 SGL Metadata Address: Not Supported 00:24:51.905 SGL Offset: Supported 00:24:51.905 Transport SGL Data Block: Not Supported 00:24:51.905 Replay Protected Memory Block: Not Supported 00:24:51.905 00:24:51.905 Firmware Slot Information 00:24:51.905 ========================= 00:24:51.905 Active slot: 0 00:24:51.905 00:24:51.905 00:24:51.905 Error Log 00:24:51.905 ========= 00:24:51.905 00:24:51.905 Active Namespaces 00:24:51.905 ================= 00:24:51.905 Discovery Log Page 00:24:51.905 ================== 00:24:51.905 Generation Counter: 2 00:24:51.905 Number of Records: 2 00:24:51.905 Record Format: 0 00:24:51.905 00:24:51.905 Discovery Log Entry 0 00:24:51.905 ---------------------- 00:24:51.905 Transport Type: 3 (TCP) 00:24:51.905 Address Family: 1 (IPv4) 00:24:51.905 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:24:51.905 Entry Flags: 00:24:51.905 Duplicate Returned Information: 0 00:24:51.905 Explicit Persistent Connection Support for Discovery: 0 00:24:51.905 Transport Requirements: 00:24:51.905 Secure Channel: Not Specified 00:24:51.905 Port ID: 1 (0x0001) 00:24:51.905 Controller ID: 65535 (0xffff) 00:24:51.905 Admin Max SQ Size: 32 00:24:51.905 Transport Service Identifier: 4420 00:24:51.905 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:51.905 Transport Address: 10.0.0.1 00:24:51.905 Discovery Log Entry 1 00:24:51.905 ---------------------- 00:24:51.905 Transport Type: 3 (TCP) 00:24:51.905 Address Family: 1 (IPv4) 00:24:51.905 Subsystem Type: 2 (NVM Subsystem) 00:24:51.905 Entry Flags: 00:24:51.905 Duplicate Returned Information: 0 00:24:51.905 Explicit Persistent Connection Support for Discovery: 0 00:24:51.905 Transport Requirements: 00:24:51.905 Secure Channel: Not Specified 00:24:51.905 Port ID: 1 (0x0001) 00:24:51.905 Controller ID: 65535 (0xffff) 00:24:51.905 Admin Max SQ Size: 32 00:24:51.905 Transport Service Identifier: 4420 00:24:51.905 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:51.905 Transport Address: 10.0.0.1 00:24:51.905 13:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:51.905 get_feature(0x01) failed 00:24:51.905 get_feature(0x02) failed 00:24:51.905 get_feature(0x04) failed 00:24:51.905 ===================================================== 00:24:51.905 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:51.905 ===================================================== 00:24:51.905 Controller Capabilities/Features 00:24:51.905 ================================ 00:24:51.905 Vendor ID: 0000 00:24:51.905 Subsystem Vendor ID: 
0000 00:24:51.905 Serial Number: ef665dfdb5f533a70347 00:24:51.905 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:51.905 Firmware Version: 6.8.9-20 00:24:51.905 Recommended Arb Burst: 6 00:24:51.905 IEEE OUI Identifier: 00 00 00 00:24:51.905 Multi-path I/O 00:24:51.905 May have multiple subsystem ports: Yes 00:24:51.905 May have multiple controllers: Yes 00:24:51.905 Associated with SR-IOV VF: No 00:24:51.906 Max Data Transfer Size: Unlimited 00:24:51.906 Max Number of Namespaces: 1024 00:24:51.906 Max Number of I/O Queues: 128 00:24:51.906 NVMe Specification Version (VS): 1.3 00:24:51.906 NVMe Specification Version (Identify): 1.3 00:24:51.906 Maximum Queue Entries: 1024 00:24:51.906 Contiguous Queues Required: No 00:24:51.906 Arbitration Mechanisms Supported 00:24:51.906 Weighted Round Robin: Not Supported 00:24:51.906 Vendor Specific: Not Supported 00:24:51.906 Reset Timeout: 7500 ms 00:24:51.906 Doorbell Stride: 4 bytes 00:24:51.906 NVM Subsystem Reset: Not Supported 00:24:51.906 Command Sets Supported 00:24:51.906 NVM Command Set: Supported 00:24:51.906 Boot Partition: Not Supported 00:24:51.906 Memory Page Size Minimum: 4096 bytes 00:24:51.906 Memory Page Size Maximum: 4096 bytes 00:24:51.906 Persistent Memory Region: Not Supported 00:24:51.906 Optional Asynchronous Events Supported 00:24:51.906 Namespace Attribute Notices: Supported 00:24:51.906 Firmware Activation Notices: Not Supported 00:24:51.906 ANA Change Notices: Supported 00:24:51.906 PLE Aggregate Log Change Notices: Not Supported 00:24:51.906 LBA Status Info Alert Notices: Not Supported 00:24:51.906 EGE Aggregate Log Change Notices: Not Supported 00:24:51.906 Normal NVM Subsystem Shutdown event: Not Supported 00:24:51.906 Zone Descriptor Change Notices: Not Supported 00:24:51.906 Discovery Log Change Notices: Not Supported 00:24:51.906 Controller Attributes 00:24:51.906 128-bit Host Identifier: Supported 00:24:51.906 Non-Operational Permissive Mode: Not Supported 00:24:51.906 NVM Sets: Not 
Supported 00:24:51.906 Read Recovery Levels: Not Supported 00:24:51.906 Endurance Groups: Not Supported 00:24:51.906 Predictable Latency Mode: Not Supported 00:24:51.906 Traffic Based Keep ALive: Supported 00:24:51.906 Namespace Granularity: Not Supported 00:24:51.906 SQ Associations: Not Supported 00:24:51.906 UUID List: Not Supported 00:24:51.906 Multi-Domain Subsystem: Not Supported 00:24:51.906 Fixed Capacity Management: Not Supported 00:24:51.906 Variable Capacity Management: Not Supported 00:24:51.906 Delete Endurance Group: Not Supported 00:24:51.906 Delete NVM Set: Not Supported 00:24:51.906 Extended LBA Formats Supported: Not Supported 00:24:51.906 Flexible Data Placement Supported: Not Supported 00:24:51.906 00:24:51.906 Controller Memory Buffer Support 00:24:51.906 ================================ 00:24:51.906 Supported: No 00:24:51.906 00:24:51.906 Persistent Memory Region Support 00:24:51.906 ================================ 00:24:51.906 Supported: No 00:24:51.906 00:24:51.906 Admin Command Set Attributes 00:24:51.906 ============================ 00:24:51.906 Security Send/Receive: Not Supported 00:24:51.906 Format NVM: Not Supported 00:24:51.906 Firmware Activate/Download: Not Supported 00:24:51.906 Namespace Management: Not Supported 00:24:51.906 Device Self-Test: Not Supported 00:24:51.906 Directives: Not Supported 00:24:51.906 NVMe-MI: Not Supported 00:24:51.906 Virtualization Management: Not Supported 00:24:51.906 Doorbell Buffer Config: Not Supported 00:24:51.906 Get LBA Status Capability: Not Supported 00:24:51.906 Command & Feature Lockdown Capability: Not Supported 00:24:51.906 Abort Command Limit: 4 00:24:51.906 Async Event Request Limit: 4 00:24:51.906 Number of Firmware Slots: N/A 00:24:51.906 Firmware Slot 1 Read-Only: N/A 00:24:51.906 Firmware Activation Without Reset: N/A 00:24:51.906 Multiple Update Detection Support: N/A 00:24:51.906 Firmware Update Granularity: No Information Provided 00:24:51.906 Per-Namespace SMART Log: Yes 
00:24:51.906 Asymmetric Namespace Access Log Page: Supported 00:24:51.906 ANA Transition Time : 10 sec 00:24:51.906 00:24:51.906 Asymmetric Namespace Access Capabilities 00:24:51.906 ANA Optimized State : Supported 00:24:51.906 ANA Non-Optimized State : Supported 00:24:51.906 ANA Inaccessible State : Supported 00:24:51.906 ANA Persistent Loss State : Supported 00:24:51.906 ANA Change State : Supported 00:24:51.906 ANAGRPID is not changed : No 00:24:51.906 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:51.906 00:24:51.906 ANA Group Identifier Maximum : 128 00:24:51.906 Number of ANA Group Identifiers : 128 00:24:51.906 Max Number of Allowed Namespaces : 1024 00:24:51.906 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:51.906 Command Effects Log Page: Supported 00:24:51.906 Get Log Page Extended Data: Supported 00:24:51.906 Telemetry Log Pages: Not Supported 00:24:51.906 Persistent Event Log Pages: Not Supported 00:24:51.906 Supported Log Pages Log Page: May Support 00:24:51.906 Commands Supported & Effects Log Page: Not Supported 00:24:51.906 Feature Identifiers & Effects Log Page:May Support 00:24:51.906 NVMe-MI Commands & Effects Log Page: May Support 00:24:51.906 Data Area 4 for Telemetry Log: Not Supported 00:24:51.906 Error Log Page Entries Supported: 128 00:24:51.906 Keep Alive: Supported 00:24:51.906 Keep Alive Granularity: 1000 ms 00:24:51.906 00:24:51.906 NVM Command Set Attributes 00:24:51.906 ========================== 00:24:51.906 Submission Queue Entry Size 00:24:51.906 Max: 64 00:24:51.906 Min: 64 00:24:51.906 Completion Queue Entry Size 00:24:51.906 Max: 16 00:24:51.906 Min: 16 00:24:51.906 Number of Namespaces: 1024 00:24:51.906 Compare Command: Not Supported 00:24:51.906 Write Uncorrectable Command: Not Supported 00:24:51.906 Dataset Management Command: Supported 00:24:51.906 Write Zeroes Command: Supported 00:24:51.906 Set Features Save Field: Not Supported 00:24:51.906 Reservations: Not Supported 00:24:51.906 Timestamp: Not Supported 
00:24:51.906 Copy: Not Supported 00:24:51.906 Volatile Write Cache: Present 00:24:51.906 Atomic Write Unit (Normal): 1 00:24:51.906 Atomic Write Unit (PFail): 1 00:24:51.906 Atomic Compare & Write Unit: 1 00:24:51.906 Fused Compare & Write: Not Supported 00:24:51.906 Scatter-Gather List 00:24:51.906 SGL Command Set: Supported 00:24:51.906 SGL Keyed: Not Supported 00:24:51.906 SGL Bit Bucket Descriptor: Not Supported 00:24:51.906 SGL Metadata Pointer: Not Supported 00:24:51.906 Oversized SGL: Not Supported 00:24:51.906 SGL Metadata Address: Not Supported 00:24:51.906 SGL Offset: Supported 00:24:51.906 Transport SGL Data Block: Not Supported 00:24:51.906 Replay Protected Memory Block: Not Supported 00:24:51.906 00:24:51.906 Firmware Slot Information 00:24:51.906 ========================= 00:24:51.906 Active slot: 0 00:24:51.906 00:24:51.906 Asymmetric Namespace Access 00:24:51.906 =========================== 00:24:51.906 Change Count : 0 00:24:51.906 Number of ANA Group Descriptors : 1 00:24:51.906 ANA Group Descriptor : 0 00:24:51.906 ANA Group ID : 1 00:24:51.906 Number of NSID Values : 1 00:24:51.906 Change Count : 0 00:24:51.906 ANA State : 1 00:24:51.906 Namespace Identifier : 1 00:24:51.906 00:24:51.906 Commands Supported and Effects 00:24:51.906 ============================== 00:24:51.906 Admin Commands 00:24:51.906 -------------- 00:24:51.906 Get Log Page (02h): Supported 00:24:51.906 Identify (06h): Supported 00:24:51.906 Abort (08h): Supported 00:24:51.906 Set Features (09h): Supported 00:24:51.906 Get Features (0Ah): Supported 00:24:51.906 Asynchronous Event Request (0Ch): Supported 00:24:51.906 Keep Alive (18h): Supported 00:24:51.906 I/O Commands 00:24:51.906 ------------ 00:24:51.906 Flush (00h): Supported 00:24:51.906 Write (01h): Supported LBA-Change 00:24:51.906 Read (02h): Supported 00:24:51.906 Write Zeroes (08h): Supported LBA-Change 00:24:51.906 Dataset Management (09h): Supported 00:24:51.906 00:24:51.906 Error Log 00:24:51.906 ========= 
00:24:51.906 Entry: 0 00:24:51.906 Error Count: 0x3 00:24:51.906 Submission Queue Id: 0x0 00:24:51.906 Command Id: 0x5 00:24:51.906 Phase Bit: 0 00:24:51.906 Status Code: 0x2 00:24:51.906 Status Code Type: 0x0 00:24:51.906 Do Not Retry: 1 00:24:51.906 Error Location: 0x28 00:24:51.906 LBA: 0x0 00:24:51.906 Namespace: 0x0 00:24:51.906 Vendor Log Page: 0x0 00:24:51.906 ----------- 00:24:51.906 Entry: 1 00:24:51.906 Error Count: 0x2 00:24:51.906 Submission Queue Id: 0x0 00:24:51.906 Command Id: 0x5 00:24:51.906 Phase Bit: 0 00:24:51.906 Status Code: 0x2 00:24:51.906 Status Code Type: 0x0 00:24:51.906 Do Not Retry: 1 00:24:51.906 Error Location: 0x28 00:24:51.906 LBA: 0x0 00:24:51.906 Namespace: 0x0 00:24:51.906 Vendor Log Page: 0x0 00:24:51.906 ----------- 00:24:51.906 Entry: 2 00:24:51.906 Error Count: 0x1 00:24:51.906 Submission Queue Id: 0x0 00:24:51.906 Command Id: 0x4 00:24:51.906 Phase Bit: 0 00:24:51.906 Status Code: 0x2 00:24:51.906 Status Code Type: 0x0 00:24:51.906 Do Not Retry: 1 00:24:51.906 Error Location: 0x28 00:24:51.906 LBA: 0x0 00:24:51.906 Namespace: 0x0 00:24:51.906 Vendor Log Page: 0x0 00:24:51.906 00:24:51.906 Number of Queues 00:24:51.906 ================ 00:24:51.906 Number of I/O Submission Queues: 128 00:24:51.906 Number of I/O Completion Queues: 128 00:24:51.906 00:24:51.906 ZNS Specific Controller Data 00:24:51.907 ============================ 00:24:51.907 Zone Append Size Limit: 0 00:24:51.907 00:24:51.907 00:24:51.907 Active Namespaces 00:24:51.907 ================= 00:24:51.907 get_feature(0x05) failed 00:24:51.907 Namespace ID:1 00:24:51.907 Command Set Identifier: NVM (00h) 00:24:51.907 Deallocate: Supported 00:24:51.907 Deallocated/Unwritten Error: Not Supported 00:24:51.907 Deallocated Read Value: Unknown 00:24:51.907 Deallocate in Write Zeroes: Not Supported 00:24:51.907 Deallocated Guard Field: 0xFFFF 00:24:51.907 Flush: Supported 00:24:51.907 Reservation: Not Supported 00:24:51.907 Namespace Sharing Capabilities: Multiple 
Controllers 00:24:51.907 Size (in LBAs): 3125627568 (1490GiB) 00:24:51.907 Capacity (in LBAs): 3125627568 (1490GiB) 00:24:51.907 Utilization (in LBAs): 3125627568 (1490GiB) 00:24:51.907 UUID: 2495ac69-d42e-46f3-a46e-f93e9c8f7c51 00:24:51.907 Thin Provisioning: Not Supported 00:24:51.907 Per-NS Atomic Units: Yes 00:24:51.907 Atomic Boundary Size (Normal): 0 00:24:51.907 Atomic Boundary Size (PFail): 0 00:24:51.907 Atomic Boundary Offset: 0 00:24:51.907 NGUID/EUI64 Never Reused: No 00:24:51.907 ANA group ID: 1 00:24:51.907 Namespace Write Protected: No 00:24:51.907 Number of LBA Formats: 1 00:24:51.907 Current LBA Format: LBA Format #00 00:24:51.907 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:51.907 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.907 rmmod nvme_tcp 00:24:51.907 rmmod nvme_fabrics 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 
00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.907 13:05:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.443 13:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:54.443 13:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:54.443 13:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:54.443 13:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:24:54.443 13:05:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.443 13:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:54.443 13:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:54.443 13:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.443 13:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:24:54.443 13:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:24:54.443 13:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:56.979 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:56.979 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:24:58.358 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:58.358 00:24:58.358 real 0m17.258s 00:24:58.358 user 0m4.410s 00:24:58.358 sys 0m8.695s 00:24:58.358 13:05:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:58.358 13:05:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.358 ************************************ 00:24:58.358 END TEST nvmf_identify_kernel_target 00:24:58.358 ************************************ 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.617 ************************************ 00:24:58.617 START TEST nvmf_auth_host 00:24:58.617 ************************************ 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:58.617 * Looking for test storage... 
00:24:58.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.617 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:58.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.617 --rc genhtml_branch_coverage=1 00:24:58.617 --rc genhtml_function_coverage=1 00:24:58.617 --rc genhtml_legend=1 00:24:58.617 --rc geninfo_all_blocks=1 00:24:58.617 --rc geninfo_unexecuted_blocks=1 00:24:58.617 00:24:58.617 ' 00:24:58.617 13:05:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:58.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.618 --rc genhtml_branch_coverage=1 00:24:58.618 --rc genhtml_function_coverage=1 00:24:58.618 --rc genhtml_legend=1 00:24:58.618 --rc geninfo_all_blocks=1 00:24:58.618 --rc geninfo_unexecuted_blocks=1 00:24:58.618 00:24:58.618 ' 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:58.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.618 --rc genhtml_branch_coverage=1 00:24:58.618 --rc genhtml_function_coverage=1 00:24:58.618 --rc genhtml_legend=1 00:24:58.618 --rc geninfo_all_blocks=1 00:24:58.618 --rc geninfo_unexecuted_blocks=1 00:24:58.618 00:24:58.618 ' 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:58.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.618 --rc genhtml_branch_coverage=1 00:24:58.618 --rc genhtml_function_coverage=1 00:24:58.618 --rc genhtml_legend=1 00:24:58.618 --rc geninfo_all_blocks=1 00:24:58.618 --rc geninfo_unexecuted_blocks=1 00:24:58.618 00:24:58.618 ' 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.618 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.878 13:05:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:58.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:58.878 13:05:18 
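The `digests` and `dhgroups` arrays declared at `host/auth.sh@13` and `@16` above define the DH-HMAC-CHAP test matrix for this suite. A minimal sketch of the cross product those arrays imply (the variable names here are illustrative, not from the SPDK scripts; the values are copied verbatim from the trace):

```python
from itertools import product

# Values taken verbatim from host/auth.sh@13 and host/auth.sh@16 in the trace.
digests = ["sha256", "sha384", "sha512"]
dhgroups = ["ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192"]

# Each (digest, dhgroup) pair is one authentication configuration to exercise.
matrix = list(product(digests, dhgroups))
print(len(matrix))  # 15 combinations
```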
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:58.878 13:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.448 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.448 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
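`gather_supported_nvmf_pci_devs` (nvmf/common.sh@313–344 above) classifies NICs by vendor:device ID into the `e810`, `x722`, and `mlx` arrays. A sketch of that lookup as a plain mapping, using only the IDs visible in the trace (`intel=0x8086`, `mellanox=0x15b3`); the dict itself is illustrative, not SPDK's data structure:

```python
# Vendor IDs as declared by gather_supported_nvmf_pci_devs in the trace.
INTEL, MLX = "0x8086", "0x15b3"

# vendor:device -> NIC family, per nvmf/common.sh@325-344 above.
nic_families = {
    f"{INTEL}:0x1592": "e810",
    f"{INTEL}:0x159b": "e810",
    f"{INTEL}:0x37d2": "x722",
    f"{MLX}:0xa2dc": "mlx",
    f"{MLX}:0x1021": "mlx",
    f"{MLX}:0xa2d6": "mlx",
    f"{MLX}:0x101d": "mlx",
    f"{MLX}:0x101b": "mlx",
    f"{MLX}:0x1017": "mlx",
    f"{MLX}:0x1019": "mlx",
    f"{MLX}:0x1015": "mlx",
    f"{MLX}:0x1013": "mlx",
}

# Both ports found in this run (0000:86:00.0/1) report 0x8086 - 0x159b:
print(nic_families["0x8086:0x159b"])  # e810
```

This is why the run logs `Found 0000:86:00.0 (0x8086 - 0x159b)` and then takes the `[[ e810 == e810 ]]` branch: both discovered ports are Intel E810 devices bound to the `ice` driver.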
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:05.449 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:05.449 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:05.449 Found net devices under 0000:86:00.0: cvl_0_0 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:05.449 Found net devices under 0000:86:00.1: cvl_0_1 00:25:05.449 13:05:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.449 13:05:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:25:05.449 00:25:05.449 --- 10.0.0.2 ping statistics --- 00:25:05.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.449 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:25:05.449 00:25:05.449 --- 10.0.0.1 ping statistics --- 00:25:05.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.449 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.449 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=1343831 00:25:05.450 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 1343831 00:25:05.450 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:05.450 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1343831 ']' 00:25:05.450 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.450 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.450 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.450 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.450 13:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:25:05.450 13:05:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2461023866aff96417b28ab45eecc79b 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.zUH 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2461023866aff96417b28ab45eecc79b 0 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2461023866aff96417b28ab45eecc79b 0 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2461023866aff96417b28ab45eecc79b 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.zUH 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.zUH 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zUH 
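`gen_dhchap_key` draws `len/2` random bytes via `xxd` from `/dev/urandom`, then `format_dhchap_key` wraps the hex string into a `DHHC-1` secret through the inline `python -` snippet at nvmf/common.sh@731. That snippet's body is not shown in the trace, so the following is a hedged reconstruction, assuming the conventional DHHC-1 representation: the ASCII bytes of the hex string are suffixed with their little-endian CRC32 and base64-encoded into `DHHC-1:<hash-id>:<base64>:`:

```python
import base64
import zlib

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Sketch of the inline formatter: append the CRC32 of the ASCII key
    bytes (little-endian) and base64-encode key+CRC. `digest` is the hash
    id seen in the trace (0=null, 1=sha256, 2=sha384, 3=sha512)."""
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(raw + crc).decode())

# Key material and digest id (0 == null) taken from the trace above;
# keys[0] is written to /tmp/spdk.key-null.zUH with mode 0600.
secret = format_dhchap_key("2461023866aff96417b28ab45eecc79b", 0)
print(secret[:10])  # DHHC-1:00:
```

The subsequent `gen_dhchap_key` invocations in the trace (sha512/64, null/48, sha384/48, sha256/32, …) differ only in the key length and the hash-id argument passed to this formatting step.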
00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=5e2407a538a3d69d8244f2280c45ae9315badf8d41f0b90c7105ad789cb978e7 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.b88 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 5e2407a538a3d69d8244f2280c45ae9315badf8d41f0b90c7105ad789cb978e7 3 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 5e2407a538a3d69d8244f2280c45ae9315badf8d41f0b90c7105ad789cb978e7 3 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=5e2407a538a3d69d8244f2280c45ae9315badf8d41f0b90c7105ad789cb978e7 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.b88 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.b88 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.b88 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=64d11d8cbf8305c8a63cf97ce8783fc546e5b5ed75e58be8 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.YvY 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 64d11d8cbf8305c8a63cf97ce8783fc546e5b5ed75e58be8 0 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 64d11d8cbf8305c8a63cf97ce8783fc546e5b5ed75e58be8 0 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
prefix=DHHC-1 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=64d11d8cbf8305c8a63cf97ce8783fc546e5b5ed75e58be8 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.YvY 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.YvY 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.YvY 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1c2204f50dba66c6854b929fa9df42df701967a40ccdb72c 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.O5w 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 1c2204f50dba66c6854b929fa9df42df701967a40ccdb72c 2 00:25:05.450 13:05:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1c2204f50dba66c6854b929fa9df42df701967a40ccdb72c 2 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1c2204f50dba66c6854b929fa9df42df701967a40ccdb72c 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.O5w 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.O5w 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.O5w 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=50817f64a0e5f8140614191cf9066557 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 
00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.TCU 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 50817f64a0e5f8140614191cf9066557 1 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 50817f64a0e5f8140614191cf9066557 1 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=50817f64a0e5f8140614191cf9066557 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.TCU 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.TCU 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.TCU 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:05.450 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 
/dev/urandom 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=74eb7f27f34084d4b2b1e600a352c610 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.XwL 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 74eb7f27f34084d4b2b1e600a352c610 1 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 74eb7f27f34084d4b2b1e600a352c610 1 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=74eb7f27f34084d4b2b1e600a352c610 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.XwL 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.XwL 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.XwL 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:25:05.451 13:05:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6363609e2723e2d30f9018956b80e0ae0ef2d7134b3d3030 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.LXZ 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6363609e2723e2d30f9018956b80e0ae0ef2d7134b3d3030 2 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6363609e2723e2d30f9018956b80e0ae0ef2d7134b3d3030 2 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6363609e2723e2d30f9018956b80e0ae0ef2d7134b3d3030 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.LXZ 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.LXZ 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.LXZ 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # 
local digest len file key 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=dd1e1eb21e4e0b5cac05dc07777ddf1a 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.ZJs 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key dd1e1eb21e4e0b5cac05dc07777ddf1a 0 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 dd1e1eb21e4e0b5cac05dc07777ddf1a 0 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=dd1e1eb21e4e0b5cac05dc07777ddf1a 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.ZJs 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.ZJs 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.ZJs 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=edf27ea61f80fee2838b8f37370daaa6f2f539e31007a3bdd9bbad17bff090d2 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.vrA 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key edf27ea61f80fee2838b8f37370daaa6f2f539e31007a3bdd9bbad17bff090d2 3 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 edf27ea61f80fee2838b8f37370daaa6f2f539e31007a3bdd9bbad17bff090d2 3 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=edf27ea61f80fee2838b8f37370daaa6f2f539e31007a3bdd9bbad17bff090d2 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:25:05.451 13:05:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.vrA 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.vrA 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.vrA 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1343831 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1343831 ']' 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
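The trace above builds each DH-HMAC-CHAP secret by reading random bytes with `xxd -p -c0 -l <len> /dev/urandom`, then passing the hex string through `format_dhchap_key`, which shells out to `python -` before the result is written to a `mktemp` file and `chmod 0600`-ed. A minimal sketch of that encoding, reconstructed from the traced values rather than copied from SPDK's `nvmf/common.sh` (the ASCII hex string is treated as the secret, a little-endian CRC32 is appended, and the result is base64-wrapped as `DHHC-1:<digest id>:<base64>:`):

```python
import base64
import zlib


def format_dhchap_key(secret_hex: str, digest_id: int, prefix: str = "DHHC-1") -> str:
    """Wrap an ASCII hex secret as a DH-HMAC-CHAP key string.

    Hedged reconstruction of what the traced `python -` step appears to do:
    base64(secret || crc32_le(secret)), prefixed with "DHHC-1:<digest id>:".
    """
    data = secret_hex.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")  # little-endian CRC32 trailer
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"{prefix}:{digest_id:02x}:{b64}:"


def dhchap_key_crc_ok(key: str) -> bool:
    """Verify the CRC32 trailer embedded in a formatted key string."""
    _prefix, _digest, b64, _trailing = key.split(":")
    blob = base64.b64decode(b64)
    return zlib.crc32(blob[:-4]).to_bytes(4, "little") == blob[-4:]
```

For example, formatting the 48-hex-character sha384 secret generated in the trace with digest id 2 yields a `DHHC-1:02:…:` string whose trailer round-trips through `dhchap_key_crc_ok`.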
00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.451 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zUH 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.b88 ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b88 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.YvY 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.O5w ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.O5w 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.TCU 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.XwL ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XwL 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.LXZ 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ZJs ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ZJs 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.vrA 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:05.711 13:05:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:05.711 13:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:08.995 Waiting for block devices as requested 00:25:08.995 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:08.995 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:08.995 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:08.995 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:08.995 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:08.995 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:08.995 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:08.995 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:08.995 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:09.254 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:09.254 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:09.254 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:09.512 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:09.512 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:09.512 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:09.512 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:09.771 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:10.338 No valid GPT data, bailing 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:10.338 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 
-- # echo 10.0.0.1 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:10.339 00:25:10.339 Discovery Log Number of Records 2, Generation counter 2 00:25:10.339 =====Discovery Log Entry 0====== 00:25:10.339 trtype: tcp 00:25:10.339 adrfam: ipv4 00:25:10.339 subtype: current discovery subsystem 00:25:10.339 treq: not specified, sq flow control disable supported 00:25:10.339 portid: 1 00:25:10.339 trsvcid: 4420 00:25:10.339 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:10.339 traddr: 10.0.0.1 00:25:10.339 eflags: none 00:25:10.339 sectype: none 00:25:10.339 =====Discovery Log Entry 1====== 00:25:10.339 trtype: tcp 00:25:10.339 adrfam: ipv4 00:25:10.339 subtype: nvme subsystem 00:25:10.339 treq: not specified, sq flow control disable supported 00:25:10.339 portid: 1 00:25:10.339 trsvcid: 4420 00:25:10.339 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:10.339 traddr: 10.0.0.1 00:25:10.339 eflags: none 00:25:10.339 sectype: none 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]] 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:10.339 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:10.597 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:10.597 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:10.597 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:10.597 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.597 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.598 nvme0n1 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]] 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 
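The `get_main_ns_ip` helper traced repeatedly in this run selects the target address by transport: `ip_candidates["rdma"]` names `NVMF_FIRST_TARGET_IP`, `ip_candidates["tcp"]` names `NVMF_INITIATOR_IP`, and after the `[[ -z … ]]` guards the named variable's value (here `10.0.0.1`) is echoed. A small sketch of that selection logic, with the environment passed in explicitly for illustration (a hypothetical wrapper, not SPDK's shell function):

```python
def get_main_ns_ip(test_transport: str, env: dict) -> str:
    """Resolve the main namespace IP the way the traced helper does:
    map the transport to an environment-variable name, then return its value."""
    ip_candidates = {
        "rdma": "NVMF_FIRST_TARGET_IP",
        "tcp": "NVMF_INITIATOR_IP",
    }
    if test_transport not in ip_candidates:  # mirrors [[ -z $TEST_TRANSPORT ]]
        raise ValueError(f"unknown transport: {test_transport!r}")
    ip = env.get(ip_candidates[test_transport], "")
    if not ip:  # mirrors the final [[ -z $ip ]] guard before echo
        raise ValueError("main namespace IP is not set")
    return ip
```

With `NVMF_INITIATOR_IP=10.0.0.1` in the environment, `get_main_ns_ip("tcp", env)` returns `10.0.0.1`, matching the `echo 10.0.0.1` lines in the trace.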
00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:10.598 13:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.857 nvme0n1
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]]
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:10.857 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:10.858 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:10.858 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:10.858 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:10.858 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:10.858 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:10.858 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:10.858 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:10.858 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:10.858 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:10.858 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:10.858 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.118 nvme0n1
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]]
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.118 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.377 nvme0n1
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==:
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N:
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==:
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]]
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N:
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.377 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.636 nvme0n1
00:25:11.636 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.636 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:11.636 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:11.636 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.636 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.636 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.636 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=:
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=:
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.637 nvme0n1
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.637 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.898 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.898 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:11.898 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:11.898 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.898 13:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG:
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=:
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG:
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]]
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=:
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:11.898 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.899 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.899 nvme0n1
00:25:11.899 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.899 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:11.899 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:11.899 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.899 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.899 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:12.164 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]]
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.165 nvme0n1
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.165 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]]
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.425 nvme0n1
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.425 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==:
00:25:12.684 13:05:32
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]] 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:12.684 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:12.685 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:12.685 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.685 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.685 nvme0n1 00:25:12.685 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.685 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.685 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.685 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
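Each attach is preceded by the `get_main_ns_ip` helper visible in the trace: it keeps an associative array mapping transport to the name of the environment variable holding the target address (`NVMF_FIRST_TARGET_IP` for rdma, `NVMF_INITIATOR_IP` for tcp), then echoes that variable's value. A self-contained sketch, with the transport passed as a parameter and the exported IP standing in for the test environment (both assumptions for illustration):

```shell
#!/usr/bin/env bash
# Sketch of get_main_ns_ip as the xtrace shows it: choose the env-var
# *name* by transport, then dereference it. The hard-coded 10.0.0.1
# below stands in for the autotest environment (assumption).
get_main_ns_ip() {
  local transport=$1 ip
  declare -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP
    [tcp]=NVMF_INITIATOR_IP
  )
  ip=${ip_candidates[$transport]}   # name of the variable to read
  echo "${!ip}"                     # bash indirect expansion
}

NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip tcp
```

The indirection is why the trace prints `ip=NVMF_INITIATOR_IP` first and only then `echo 10.0.0.1`: the function resolves the variable name before dereferencing it.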
00:25:12.685 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.685 13:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.944 13:05:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.944 nvme0n1 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.944 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.203 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.203 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.203 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.203 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:13.203 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.203 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]] 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.204 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.463 nvme0n1 00:25:13.463 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.463 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.463 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.463 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.463 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.463 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.463 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.463 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.463 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.463 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.463 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
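After every attach, the test verifies the controller actually came up (`bdev_nvme_get_controllers` piped through `jq -r '.[].name'`, compared against `nvme0`) and detaches it before moving to the next key. A dry-run sketch of that check, where the canned controller list stands in for live `rpc.py` output (an assumption, so the sketch needs no running target or `jq`):

```shell
#!/usr/bin/env bash
# Sketch of the per-key verification step repeated throughout the log:
# list controllers, confirm "nvme0" exists, detach it. In the real run
# the list comes from: rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
set -euo pipefail

verify_and_detach() {
  local controllers=$1 name
  for name in $controllers; do
    if [[ $name == nvme0 ]]; then
      echo "rpc.py bdev_nvme_detach_controller $name"  # dry-run print
      return 0
    fi
  done
  echo "controller nvme0 not found" >&2
  return 1
}

verify_and_detach "nvme0"
```

Detaching between iterations matters: each `connect_authenticate` reuses the bdev name `nvme0`, so a leftover controller from the previous key would make the next attach fail for the wrong reason.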
00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]] 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.464 
13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.464 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.723 nvme0n1 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.723 13:05:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]] 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.723 13:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.982 nvme0n1 00:25:13.982 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.982 13:05:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.982 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.982 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.982 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.982 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.982 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.982 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.982 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.982 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:14.242 
13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]] 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:14.242 13:05:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.242 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.501 nvme0n1 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.501 13:05:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.501 
13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.501 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.760 nvme0n1 00:25:14.760 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.760 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.760 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.760 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.760 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]] 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.761 13:05:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.761 13:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.329 nvme0n1 00:25:15.329 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.329 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.329 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.329 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.329 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.329 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.329 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.329 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.329 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.329 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]] 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.330 13:05:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.330 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.589 nvme0n1 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]] 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.590 13:05:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.590 13:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.158 nvme0n1 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.158 13:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.158 13:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]] 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:16.158 13:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:16.158 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.159 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.159 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.417 nvme0n1 00:25:16.417 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.417 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.417 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.417 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.417 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.417 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.676 13:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:16.676 13:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.936 nvme0n1
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG:
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=:
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG:
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]]
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=:
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:16.936 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.504 nvme0n1
00:25:17.504 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.504 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:17.504 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:17.504 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.504 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.504 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]]
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:17.763 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:17.764 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:17.764 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:17.764 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:17.764 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:17.764 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:17.764 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:17.764 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:17.764 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:17.764 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:17.764 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.764 13:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.331 nvme0n1
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]]
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.331 13:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.900 nvme0n1
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==:
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N:
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==:
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]]
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N:
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:18.900 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:18.901 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.901 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:19.468 nvme0n1
00:25:19.468 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:19.468 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:19.468 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:19.468 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:19.468 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:19.468 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:19.468 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:19.468 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:19.468 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:19.468 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=:
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=:
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:19.728 13:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:20.296 nvme0n1
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG:
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=:
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG:
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]]
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=:
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:20.296 nvme0n1
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:20.296 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]]
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests
sha384 --dhchap-dhgroups ffdhe2048 00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.555 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.556 nvme0n1 
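The `get_main_ns_ip` trace that repeats above (nvmf/common.sh@767-781) maps the test transport to the name of an environment variable, then resolves that name to an IP via indirect expansion: the trace shows `ip=NVMF_INITIATOR_IP` at @774 but `echo 10.0.0.1` at @781. Below is a hypothetical condensed sketch of that logic in bash; the `TEST_TRANSPORT` and `NVMF_*` values are placeholders, and the real function in nvmf/common.sh may differ in details not visible in this trace.

```shell
# Condensed sketch (assumption, not the verbatim SPDK helper).
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    # Each transport selects the NAME of the variable holding its IP.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}   # indirect expansion: variable name -> its value
    [[ -z $ip ]] && return 1
    echo "$ip"
}

get_main_ns_ip   # with tcp, echoes the initiator IP (10.0.0.1 here)
```

The two-level lookup (transport -> variable name -> value) is why the trace prints the variable name first and the address only at the final `echo`.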
00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.556 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:20.815 13:05:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]] 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.815 
13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.815 13:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.815 nvme0n1 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.815 13:05:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]] 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.815 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 nvme0n1 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.075 13:05:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.075 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.334 nvme0n1 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]] 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.334 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:21.335 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:21.335 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:21.335 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.335 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.335 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:21.335 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.335 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:21.335 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:21.335 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:21.335 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.335 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.335 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.594 nvme0n1 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.594 
13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]] 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.594 13:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.853 nvme0n1 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 
00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.853 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]] 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:21.854 13:05:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.854 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.113 nvme0n1 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.113 13:05:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]] 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.113 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.114 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.372 nvme0n1 00:25:22.372 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.372 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.372 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.372 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.372 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.372 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.372 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.372 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.372 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:22.372 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.373 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.632 nvme0n1 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.632 13:05:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]] 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.632 13:05:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.632 13:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.632 13:05:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.891 nvme0n1 00:25:22.891 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.891 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.891 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.891 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.891 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.891 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]] 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.892 
13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.892 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.151 nvme0n1 00:25:23.151 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.151 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.151 13:05:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.151 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.151 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.151 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.151 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.151 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.151 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.151 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.409 13:05:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]] 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # local -A ip_candidates 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.409 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.688 nvme0n1 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]] 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:23.688 13:05:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.688 13:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.985 nvme0n1 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.985 13:05:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.985 13:05:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.985 
13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.985 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.269 nvme0n1 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.269 13:05:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]] 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.269 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.837 nvme0n1 
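The `DHHC-1:xx:...:` strings echoed throughout this log are DH-HMAC-CHAP secrets. As a sketch of how such a secret is put together and checked (assuming the convention used by nvme-cli and SPDK: the base64 field encodes the raw key material with a little-endian CRC-32 of that material appended, and the two-digit field names the hash; the helper functions below are illustrative, not SPDK code):

```python
import base64
import struct
import zlib


def make_dhchap_secret(key: bytes, hash_id: str = "01") -> str:
    """Build a DHHC-1 secret string from raw key bytes.

    The encoded blob is the key material followed by the little-endian
    CRC-32 of that material (assumed format, matching what
    `nvme gen-dhchap-key` style tools emit).
    """
    blob = key + struct.pack("<I", zlib.crc32(key))
    return f"DHHC-1:{hash_id}:{base64.b64encode(blob).decode()}:"


def check_dhchap_secret(secret: str) -> bool:
    """Validate the shape and trailing CRC-32 of a DHHC-1 secret."""
    parts = secret.rstrip(":").split(":")
    if len(parts) != 3:
        return False
    prefix, hash_id, blob_b64 = parts
    if prefix != "DHHC-1" or hash_id not in ("00", "01", "02", "03"):
        return False
    raw = base64.b64decode(blob_b64)
    if len(raw) <= 4:
        return False
    key, crc = raw[:-4], raw[-4:]
    return struct.unpack("<I", crc)[0] == zlib.crc32(key)
```

A round trip through `make_dhchap_secret` should always satisfy `check_dhchap_secret`, while a blob whose trailing four bytes do not match the CRC of the preceding key material is rejected.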
00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:24.837 13:05:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]] 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.837 
13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.837 13:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.096 nvme0n1 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.096 13:05:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:25.096 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]] 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.097 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.664 nvme0n1 00:25:25.664 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.664 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.664 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.664 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.664 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.664 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.664 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.664 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:25.664 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.664 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.664 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.664 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]] 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 
-- # ip=NVMF_INITIATOR_IP 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.665 13:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.923 nvme0n1 00:25:25.923 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.184 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:26.443 nvme0n1 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.444 13:05:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]] 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.444 13:05:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.444 13:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.380 nvme0n1 00:25:27.380 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.380 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.380 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.380 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.380 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.380 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.380 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.380 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:27.381 13:05:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]] 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 
00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.381 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.949 nvme0n1 00:25:27.949 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.949 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.949 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.949 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.949 13:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.949 
13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.949 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.949 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.949 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.949 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]] 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.950 13:05:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.518 nvme0n1 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.518 13:05:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]] 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:28.518 13:05:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.518 13:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.087 nvme0n1 00:25:29.087 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.087 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.087 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:29.088 13:05:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.088 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.656 nvme0n1 00:25:29.656 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.656 
13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.656 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.656 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.656 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.656 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.656 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.656 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.656 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.656 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]] 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.915 13:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.915 nvme0n1 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.915 13:05:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:29.915 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]] 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 
-- # ip_candidates=() 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:29.916 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.174 nvme0n1 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.174 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]] 00:25:30.175 13:05:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 
00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.175 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.434 nvme0n1 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.434 13:05:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]] 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:30.434 13:05:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.434 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.694 nvme0n1 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.694 13:05:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.694 13:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.953 nvme0n1 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]] 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.953 13:05:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:30.953 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.213 nvme0n1
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]]
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.213 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.473 nvme0n1
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]]
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.473 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.733 nvme0n1
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==:
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N:
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==:
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]]
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N:
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.733 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.734 13:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.993 nvme0n1
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=:
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=:
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.993 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.253 nvme0n1
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG:
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=:
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG:
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]]
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=:
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.253 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.513 nvme0n1
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]]
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.513 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.773 nvme0n1
00:25:32.773 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.773 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:32.773 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:32.773 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.773 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.773 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.773 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:32.773 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:32.773 13:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]]
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.773 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:33.032 nvme0n1
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.032 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]] 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.033 13:05:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.033 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.292 nvme0n1 00:25:33.292 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.292 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.292 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.292 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.292 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.292 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.551 
13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.551 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.811 nvme0n1 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.811 13:05:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]] 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 
-- # local -A ip_candidates 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.811 13:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.070 nvme0n1 00:25:34.070 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.070 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.070 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.070 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.070 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.070 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.070 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:34.070 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.070 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.070 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]] 00:25:34.329 13:05:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
[[ -z tcp ]] 00:25:34.329 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.330 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:34.330 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:34.330 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:34.330 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.330 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.330 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.589 nvme0n1 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]] 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.589 
13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.589 13:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.157 nvme0n1 00:25:35.157 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.157 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.157 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.157 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.157 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.157 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.157 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.157 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.158 13:05:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]] 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.158 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:35.417 nvme0n1 00:25:35.417 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.417 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.417 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.417 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.417 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.417 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=: 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:35.676 
13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.676 13:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.936 nvme0n1 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjQ2MTAyMzg2NmFmZjk2NDE3YjI4YWI0NWVlY2M3OWJ+jhGG: 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: ]] 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWUyNDA3YTUzOGEzZDY5ZDgyNDRmMjI4MGM0NWFlOTMxNWJhZGY4ZDQxZjBiOTBjNzEwNWFkNzg5Y2I5NzhlN4lqQ2Q=: 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:35.936 13:05:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.936 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.503 nvme0n1 00:25:36.503 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.503 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.503 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.503 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.503 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.504 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.763 13:05:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.763 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==: 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]] 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:36.764 13:05:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.764 13:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.331 nvme0n1 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.331 13:05:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw: 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]] 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.331 13:05:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.331 13:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.898 nvme0n1 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.898 13:05:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM2MzYwOWUyNzIzZTJkMzBmOTAxODk1NmI4MGUwYWUwZWYyZDcxMzRiM2QzMDMwv3mZRg==: 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: ]] 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQxZTFlYjIxZTRlMGI1Y2FjMDVkYzA3Nzc3ZGRmMWHhth1N: 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:37.898 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:37.899 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:37.899 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.899 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.899 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:37.899 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.899 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:37.899 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:37.899 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:37.899 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.899 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.899 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:38.466 nvme0n1 00:25:38.466 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.466 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.466 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.466 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.466 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.466 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.466 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.466 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.466 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.466 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=:
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWRmMjdlYTYxZjgwZmVlMjgzOGI4ZjM3MzcwZGFhYTZmMmY1MzllMzEwMDdhM2JkZDliYmFkMTdiZmYwOTBkMnA6bfk=:
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:38.726 13:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.294 nvme0n1
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]]
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:39.294 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.295 request:
00:25:39.295 {
00:25:39.295 "name": "nvme0",
00:25:39.295 "trtype": "tcp",
00:25:39.295 "traddr": "10.0.0.1",
00:25:39.295 "adrfam": "ipv4",
00:25:39.295 "trsvcid": "4420",
00:25:39.295 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:25:39.295 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:25:39.295 "prchk_reftag": false,
00:25:39.295 "prchk_guard": false,
00:25:39.295 "hdgst": false,
00:25:39.295 "ddgst": false,
00:25:39.295 "allow_unrecognized_csi": false,
00:25:39.295 "method": "bdev_nvme_attach_controller",
00:25:39.295 "req_id": 1
00:25:39.295 }
00:25:39.295 Got JSON-RPC error response
00:25:39.295 response:
00:25:39.295 {
00:25:39.295 "code": -5,
00:25:39.295 "message": "Input/output error"
00:25:39.295 }
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.295 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.554 request:
00:25:39.554 {
00:25:39.554 "name": "nvme0",
00:25:39.554 "trtype": "tcp",
00:25:39.554 "traddr": "10.0.0.1",
00:25:39.554 "adrfam": "ipv4",
00:25:39.554 "trsvcid": "4420",
00:25:39.554 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:25:39.554 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:25:39.554 "prchk_reftag": false,
00:25:39.554 "prchk_guard": false,
00:25:39.554 "hdgst": false,
00:25:39.554 "ddgst": false,
00:25:39.554 "dhchap_key": "key2",
00:25:39.554 "allow_unrecognized_csi": false,
00:25:39.554 "method": "bdev_nvme_attach_controller",
00:25:39.554 "req_id": 1
00:25:39.554 }
00:25:39.554 Got JSON-RPC error response
00:25:39.554 response:
00:25:39.554 {
00:25:39.554 "code": -5,
00:25:39.554 "message": "Input/output error"
00:25:39.554 }
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.554 request:
00:25:39.554 {
00:25:39.554 "name": "nvme0",
00:25:39.554 "trtype": "tcp",
00:25:39.554 "traddr": "10.0.0.1",
00:25:39.554 "adrfam": "ipv4",
00:25:39.554 "trsvcid": "4420",
00:25:39.554 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:25:39.554 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:25:39.554 "prchk_reftag": false,
00:25:39.554 "prchk_guard": false,
00:25:39.554 "hdgst": false,
00:25:39.554 "ddgst": false,
00:25:39.554 "dhchap_key": "key1",
00:25:39.554 "dhchap_ctrlr_key": "ckey2",
00:25:39.554 "allow_unrecognized_csi": false,
00:25:39.554 "method": "bdev_nvme_attach_controller",
00:25:39.554 "req_id": 1
00:25:39.554 }
00:25:39.554 Got JSON-RPC error response
00:25:39.554 response:
00:25:39.554 {
00:25:39.554 "code": -5,
00:25:39.554 "message": "Input/output error"
00:25:39.554 }
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:39.554 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.555 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.814 nvme0n1
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]]
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.814 13:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.814 request:
00:25:39.814 {
00:25:39.814 "name": "nvme0",
00:25:39.814 "dhchap_key": "key1",
00:25:39.814 "dhchap_ctrlr_key": "ckey2",
00:25:39.814 "method": "bdev_nvme_set_keys",
00:25:39.814 "req_id": 1
00:25:39.814 }
00:25:39.814 Got JSON-RPC error response
00:25:39.814 response:
00:25:39.814 {
00:25:39.814 "code": -13,
00:25:39.814 "message": "Permission denied"
00:25:39.814 }
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.814 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:40.073 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:25:40.073 13:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:25:41.011 13:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:25:41.011 13:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.011 13:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.011 13:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:25:41.011 13:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.011 13:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:25:41.011 13:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjRkMTFkOGNiZjgzMDVjOGE2M2NmOTdjZTg3ODNmYzU0NmU1YjVlZDc1ZTU4YmU4OmoHkw==:
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==: ]]
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyMjA0ZjUwZGJhNjZjNjg1NGI5MjlmYTlkZjQyZGY3MDE5NjdhNDBjY2RiNzJj1v76Cw==:
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.947 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.206 nvme0n1
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTA4MTdmNjRhMGU1ZjgxNDA2MTQxOTFjZjkwNjY1NTf8Hcsw:
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC: ]]
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlYjdmMjdmMzQwODRkNGIyYjFlNjAwYTM1MmM2MTA6T4iC:
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:25:42.206 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.207 request:
00:25:42.207 {
00:25:42.207 "name": "nvme0",
00:25:42.207 "dhchap_key": "key2",
00:25:42.207 "dhchap_ctrlr_key": "ckey1",
00:25:42.207 "method": "bdev_nvme_set_keys",
00:25:42.207 "req_id": 1
00:25:42.207 }
00:25:42.207 Got JSON-RPC error response
00:25:42.207 response:
00:25:42.207 {
00:25:42.207 "code": -13,
00:25:42.207 "message": "Permission denied"
00:25:42.207 }
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:25:42.207 13:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:43.584 rmmod nvme_tcp
rmmod nvme_fabrics
13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- #
modprobe -v -r nvme-fabrics 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 1343831 ']' 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 1343831 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1343831 ']' 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1343831 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1343831 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1343831' 00:25:43.584 killing process with pid 1343831 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1343831 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1343831 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:43.584 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:25:43.585 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:43.585 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:43.585 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.585 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.585 13:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:25:46.121 13:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:48.657 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:48.657 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:50.034 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:50.293 13:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zUH /tmp/spdk.key-null.YvY /tmp/spdk.key-sha256.TCU /tmp/spdk.key-sha384.LXZ /tmp/spdk.key-sha512.vrA 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:50.293 13:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:52.919 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:52.919 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:52.919 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:53.178 00:25:53.178 real 0m54.522s 00:25:53.178 user 0m48.812s 00:25:53.178 sys 0m12.473s 00:25:53.178 13:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:53.178 13:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.178 ************************************ 00:25:53.178 END TEST nvmf_auth_host 00:25:53.178 ************************************ 00:25:53.178 13:06:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:25:53.178 13:06:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:53.178 13:06:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:53.178 13:06:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:53.178 13:06:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.178 ************************************ 00:25:53.178 START TEST nvmf_digest 00:25:53.178 ************************************ 00:25:53.178 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:53.178 * Looking for test storage... 00:25:53.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:53.178 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:53.178 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:25:53.178 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:53.437 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:53.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.438 --rc genhtml_branch_coverage=1 00:25:53.438 --rc genhtml_function_coverage=1 00:25:53.438 --rc genhtml_legend=1 00:25:53.438 --rc geninfo_all_blocks=1 00:25:53.438 --rc geninfo_unexecuted_blocks=1 00:25:53.438 00:25:53.438 ' 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:53.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.438 --rc genhtml_branch_coverage=1 00:25:53.438 --rc genhtml_function_coverage=1 00:25:53.438 --rc genhtml_legend=1 00:25:53.438 --rc geninfo_all_blocks=1 00:25:53.438 --rc geninfo_unexecuted_blocks=1 00:25:53.438 00:25:53.438 ' 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:53.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.438 --rc genhtml_branch_coverage=1 00:25:53.438 --rc genhtml_function_coverage=1 00:25:53.438 --rc genhtml_legend=1 00:25:53.438 --rc geninfo_all_blocks=1 00:25:53.438 --rc geninfo_unexecuted_blocks=1 00:25:53.438 00:25:53.438 ' 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:53.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.438 --rc genhtml_branch_coverage=1 00:25:53.438 --rc genhtml_function_coverage=1 00:25:53.438 --rc genhtml_legend=1 00:25:53.438 --rc geninfo_all_blocks=1 00:25:53.438 --rc geninfo_unexecuted_blocks=1 00:25:53.438 00:25:53.438 ' 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:53.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:53.438 13:06:13 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:53.438 13:06:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.010 13:06:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:00.010 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.010 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:00.011 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:00.011 Found net devices under 0000:86:00.0: cvl_0_0 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:00.011 Found net devices under 0000:86:00.1: cvl_0_1 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@440 -- # is_hw=yes 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:00.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:26:00.011 00:26:00.011 --- 10.0.0.2 ping statistics --- 00:26:00.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.011 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:00.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:26:00.011 00:26:00.011 --- 10.0.0.1 ping statistics --- 00:26:00.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.011 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:00.011 ************************************ 00:26:00.011 START TEST nvmf_digest_clean 00:26:00.011 ************************************ 00:26:00.011 
13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=1358112 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 1358112 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1358112 ']' 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:00.011 13:06:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:00.011 [2024-10-15 13:06:19.562640] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:26:00.011 [2024-10-15 13:06:19.562688] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.011 [2024-10-15 13:06:19.636137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.011 [2024-10-15 13:06:19.676838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.011 [2024-10-15 13:06:19.676872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.011 [2024-10-15 13:06:19.676880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.011 [2024-10-15 13:06:19.676886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.011 [2024-10-15 13:06:19.676891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:00.011 [2024-10-15 13:06:19.677442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:00.011 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:00.012 null0 00:26:00.012 [2024-10-15 13:06:19.833248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.012 [2024-10-15 13:06:19.857438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1358142 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1358142 /var/tmp/bperf.sock 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1358142 ']' 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:00.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:00.012 13:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:00.012 [2024-10-15 13:06:19.911261] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:26:00.012 [2024-10-15 13:06:19.911303] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358142 ] 00:26:00.012 [2024-10-15 13:06:19.979218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.012 [2024-10-15 13:06:20.024538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.012 13:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:00.012 13:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:00.012 13:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:00.012 13:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:00.012 13:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:00.012 13:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.012 13:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.579 nvme0n1 00:26:00.579 13:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:00.579 13:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:00.579 Running I/O for 2 seconds... 00:26:02.452 25653.00 IOPS, 100.21 MiB/s [2024-10-15T11:06:22.772Z] 25913.00 IOPS, 101.22 MiB/s 00:26:02.453 Latency(us) 00:26:02.453 [2024-10-15T11:06:22.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.453 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:02.453 nvme0n1 : 2.00 25928.95 101.28 0.00 0.00 4932.36 2200.14 12170.97 00:26:02.453 [2024-10-15T11:06:22.772Z] =================================================================================================================== 00:26:02.453 [2024-10-15T11:06:22.772Z] Total : 25928.95 101.28 0.00 0.00 4932.36 2200.14 12170.97 00:26:02.453 { 00:26:02.453 "results": [ 00:26:02.453 { 00:26:02.453 "job": "nvme0n1", 00:26:02.453 "core_mask": "0x2", 00:26:02.453 "workload": "randread", 00:26:02.453 "status": "finished", 00:26:02.453 "queue_depth": 128, 00:26:02.453 "io_size": 4096, 00:26:02.453 "runtime": 2.003706, 00:26:02.453 "iops": 25928.953648888608, 00:26:02.453 "mibps": 101.28497519097112, 00:26:02.453 "io_failed": 0, 00:26:02.453 "io_timeout": 0, 00:26:02.453 "avg_latency_us": 4932.360165934334, 00:26:02.453 "min_latency_us": 2200.137142857143, 00:26:02.453 "max_latency_us": 12170.971428571429 00:26:02.453 } 00:26:02.453 ], 00:26:02.453 "core_count": 1 00:26:02.453 } 00:26:02.453 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:02.453 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:02.453 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:02.453 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:02.453 | select(.opcode=="crc32c") 00:26:02.453 | "\(.module_name) \(.executed)"' 00:26:02.453 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1358142 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1358142 ']' 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1358142 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1358142 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1358142' 00:26:02.712 killing process with pid 1358142 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1358142 00:26:02.712 Received shutdown signal, test time was about 2.000000 seconds 00:26:02.712 00:26:02.712 Latency(us) 00:26:02.712 [2024-10-15T11:06:23.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.712 [2024-10-15T11:06:23.031Z] =================================================================================================================== 00:26:02.712 [2024-10-15T11:06:23.031Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.712 13:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1358142 00:26:02.971 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1358615 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1358615 /var/tmp/bperf.sock 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1358615 ']' 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:02.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:02.972 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.972 [2024-10-15 13:06:23.175398] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:26:02.972 [2024-10-15 13:06:23.175446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358615 ] 00:26:02.972 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:02.972 Zero copy mechanism will not be used. 
00:26:02.972 [2024-10-15 13:06:23.243175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.972 [2024-10-15 13:06:23.282614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.231 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:03.231 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:03.231 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:03.231 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:03.231 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:03.490 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.490 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.748 nvme0n1 00:26:03.748 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:03.748 13:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:03.748 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:03.748 Zero copy mechanism will not be used. 00:26:03.748 Running I/O for 2 seconds... 
00:26:06.065 4637.00 IOPS, 579.62 MiB/s [2024-10-15T11:06:26.384Z] 4622.00 IOPS, 577.75 MiB/s 00:26:06.065 Latency(us) 00:26:06.065 [2024-10-15T11:06:26.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.065 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:06.065 nvme0n1 : 2.00 4625.56 578.20 0.00 0.00 3456.48 729.48 6990.51 00:26:06.065 [2024-10-15T11:06:26.384Z] =================================================================================================================== 00:26:06.065 [2024-10-15T11:06:26.384Z] Total : 4625.56 578.20 0.00 0.00 3456.48 729.48 6990.51 00:26:06.065 { 00:26:06.065 "results": [ 00:26:06.065 { 00:26:06.065 "job": "nvme0n1", 00:26:06.065 "core_mask": "0x2", 00:26:06.065 "workload": "randread", 00:26:06.065 "status": "finished", 00:26:06.065 "queue_depth": 16, 00:26:06.065 "io_size": 131072, 00:26:06.065 "runtime": 2.001918, 00:26:06.065 "iops": 4625.5640840434025, 00:26:06.065 "mibps": 578.1955105054253, 00:26:06.065 "io_failed": 0, 00:26:06.065 "io_timeout": 0, 00:26:06.065 "avg_latency_us": 3456.4761394631287, 00:26:06.065 "min_latency_us": 729.4780952380952, 00:26:06.065 "max_latency_us": 6990.506666666667 00:26:06.065 } 00:26:06.065 ], 00:26:06.065 "core_count": 1 00:26:06.065 } 00:26:06.065 13:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:06.065 13:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:06.065 13:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:06.065 13:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:06.065 | select(.opcode=="crc32c") 00:26:06.065 | "\(.module_name) \(.executed)"' 00:26:06.065 13:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1358615 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1358615 ']' 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1358615 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1358615 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1358615' 00:26:06.065 killing process with pid 1358615 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1358615 00:26:06.065 Received shutdown signal, test time was about 2.000000 seconds 
00:26:06.065 00:26:06.065 Latency(us) 00:26:06.065 [2024-10-15T11:06:26.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.065 [2024-10-15T11:06:26.384Z] =================================================================================================================== 00:26:06.065 [2024-10-15T11:06:26.384Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.065 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1358615 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1359236 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1359236 /var/tmp/bperf.sock 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1359236 ']' 00:26:06.325 13:06:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:06.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:06.325 [2024-10-15 13:06:26.460547] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:26:06.325 [2024-10-15 13:06:26.460594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1359236 ] 00:26:06.325 [2024-10-15 13:06:26.527932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.325 [2024-10-15 13:06:26.570011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:06.325 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:06.583 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.583 13:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.841 nvme0n1 00:26:06.841 13:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:06.841 13:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:07.100 Running I/O for 2 seconds... 
00:26:08.972 28455.00 IOPS, 111.15 MiB/s [2024-10-15T11:06:29.291Z] 28550.00 IOPS, 111.52 MiB/s 00:26:08.972 Latency(us) 00:26:08.972 [2024-10-15T11:06:29.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.972 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:08.972 nvme0n1 : 2.01 28567.56 111.59 0.00 0.00 4476.59 1778.83 10735.42 00:26:08.972 [2024-10-15T11:06:29.291Z] =================================================================================================================== 00:26:08.972 [2024-10-15T11:06:29.291Z] Total : 28567.56 111.59 0.00 0.00 4476.59 1778.83 10735.42 00:26:08.972 { 00:26:08.972 "results": [ 00:26:08.972 { 00:26:08.972 "job": "nvme0n1", 00:26:08.972 "core_mask": "0x2", 00:26:08.972 "workload": "randwrite", 00:26:08.973 "status": "finished", 00:26:08.973 "queue_depth": 128, 00:26:08.973 "io_size": 4096, 00:26:08.973 "runtime": 2.007697, 00:26:08.973 "iops": 28567.557753983794, 00:26:08.973 "mibps": 111.5920224764992, 00:26:08.973 "io_failed": 0, 00:26:08.973 "io_timeout": 0, 00:26:08.973 "avg_latency_us": 4476.589850579723, 00:26:08.973 "min_latency_us": 1778.8342857142857, 00:26:08.973 "max_latency_us": 10735.420952380953 00:26:08.973 } 00:26:08.973 ], 00:26:08.973 "core_count": 1 00:26:08.973 } 00:26:08.973 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:08.973 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:08.973 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:08.973 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:08.973 | select(.opcode=="crc32c") 00:26:08.973 | "\(.module_name) \(.executed)"' 00:26:08.973 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1359236 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1359236 ']' 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1359236 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1359236 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1359236' 00:26:09.232 killing process with pid 1359236 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1359236 00:26:09.232 Received shutdown signal, test time was about 2.000000 seconds 
00:26:09.232 00:26:09.232 Latency(us) 00:26:09.232 [2024-10-15T11:06:29.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.232 [2024-10-15T11:06:29.551Z] =================================================================================================================== 00:26:09.232 [2024-10-15T11:06:29.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.232 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1359236 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1359773 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1359773 /var/tmp/bperf.sock 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1359773 ']' 00:26:09.491 13:06:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:09.491 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:09.491 [2024-10-15 13:06:29.732968] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:26:09.491 [2024-10-15 13:06:29.733021] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1359773 ] 00:26:09.491 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.491 Zero copy mechanism will not be used. 
00:26:09.491 [2024-10-15 13:06:29.802961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.750 [2024-10-15 13:06:29.839822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.750 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:09.750 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:09.751 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:09.751 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:09.751 13:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:10.010 13:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.010 13:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.268 nvme0n1 00:26:10.268 13:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:10.269 13:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.527 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:10.527 Zero copy mechanism will not be used. 00:26:10.527 Running I/O for 2 seconds... 
00:26:12.402 6481.00 IOPS, 810.12 MiB/s [2024-10-15T11:06:32.721Z] 6458.50 IOPS, 807.31 MiB/s 00:26:12.402 Latency(us) 00:26:12.402 [2024-10-15T11:06:32.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.402 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:12.402 nvme0n1 : 2.00 6453.49 806.69 0.00 0.00 2474.42 1934.87 9299.87 00:26:12.402 [2024-10-15T11:06:32.721Z] =================================================================================================================== 00:26:12.402 [2024-10-15T11:06:32.721Z] Total : 6453.49 806.69 0.00 0.00 2474.42 1934.87 9299.87 00:26:12.402 { 00:26:12.402 "results": [ 00:26:12.402 { 00:26:12.402 "job": "nvme0n1", 00:26:12.402 "core_mask": "0x2", 00:26:12.402 "workload": "randwrite", 00:26:12.402 "status": "finished", 00:26:12.402 "queue_depth": 16, 00:26:12.402 "io_size": 131072, 00:26:12.402 "runtime": 2.003877, 00:26:12.402 "iops": 6453.489909809834, 00:26:12.402 "mibps": 806.6862387262292, 00:26:12.402 "io_failed": 0, 00:26:12.402 "io_timeout": 0, 00:26:12.402 "avg_latency_us": 2474.4179553120352, 00:26:12.402 "min_latency_us": 1934.872380952381, 00:26:12.402 "max_latency_us": 9299.870476190476 00:26:12.402 } 00:26:12.402 ], 00:26:12.402 "core_count": 1 00:26:12.402 } 00:26:12.402 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:12.402 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:12.402 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:12.403 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:12.403 | select(.opcode=="crc32c") 00:26:12.403 | "\(.module_name) \(.executed)"' 00:26:12.403 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1359773 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1359773 ']' 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1359773 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1359773 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1359773' 00:26:12.662 killing process with pid 1359773 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1359773 00:26:12.662 Received shutdown signal, test time was about 2.000000 seconds 
00:26:12.662 00:26:12.662 Latency(us) 00:26:12.662 [2024-10-15T11:06:32.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.662 [2024-10-15T11:06:32.981Z] =================================================================================================================== 00:26:12.662 [2024-10-15T11:06:32.981Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.662 13:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1359773 00:26:12.921 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1358112 00:26:12.921 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1358112 ']' 00:26:12.921 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1358112 00:26:12.921 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:12.921 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:12.921 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1358112 00:26:12.921 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:12.921 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:12.921 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1358112' 00:26:12.921 killing process with pid 1358112 00:26:12.921 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1358112 00:26:12.921 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1358112 00:26:13.181 00:26:13.181 
real 0m13.761s 00:26:13.181 user 0m26.242s 00:26:13.181 sys 0m4.509s 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.181 ************************************ 00:26:13.181 END TEST nvmf_digest_clean 00:26:13.181 ************************************ 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:13.181 ************************************ 00:26:13.181 START TEST nvmf_digest_error 00:26:13.181 ************************************ 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=1360297 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 1360297 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1360297 ']' 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.181 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.181 [2024-10-15 13:06:33.390505] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:26:13.181 [2024-10-15 13:06:33.390548] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.181 [2024-10-15 13:06:33.463918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.441 [2024-10-15 13:06:33.504728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.441 [2024-10-15 13:06:33.504764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:13.441 [2024-10-15 13:06:33.504774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.441 [2024-10-15 13:06:33.504780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.441 [2024-10-15 13:06:33.504785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.441 [2024-10-15 13:06:33.505366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.441 [2024-10-15 13:06:33.581832] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.441 13:06:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.441 null0 00:26:13.441 [2024-10-15 13:06:33.672989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.441 [2024-10-15 13:06:33.697183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1360512 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1360512 /var/tmp/bperf.sock 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1360512 ']' 
00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:13.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.441 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.441 [2024-10-15 13:06:33.749824] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:26:13.441 [2024-10-15 13:06:33.749865] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360512 ] 00:26:13.700 [2024-10-15 13:06:33.816841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.700 [2024-10-15 13:06:33.856979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.700 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.700 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:13.700 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:13.700 13:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:13.958 13:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:13.958 13:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.958 13:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.958 13:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.958 13:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:13.959 13:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.218 nvme0n1 00:26:14.218 13:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:14.218 13:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.218 13:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:14.218 13:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.218 13:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:14.218 13:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:14.477 Running I/O for 2 seconds... 00:26:14.477 [2024-10-15 13:06:34.579087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:14.477 [2024-10-15 13:06:34.579119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.477 [2024-10-15 13:06:34.579129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.477 [2024-10-15 13:06:34.589382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:14.477 [2024-10-15 13:06:34.589417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.477 [2024-10-15 13:06:34.589426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.477 [2024-10-15 13:06:34.602459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:14.477 [2024-10-15 13:06:34.602481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.477 [2024-10-15 13:06:34.602489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.477 [2024-10-15 13:06:34.610646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:14.477 [2024-10-15 13:06:34.610668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11913 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.477 [2024-10-15 13:06:34.610677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.477 [2024-10-15 13:06:34.621880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:14.477 [2024-10-15 13:06:34.621902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.477 [2024-10-15 13:06:34.621910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.477 [2024-10-15 13:06:34.632513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:14.477 [2024-10-15 13:06:34.632534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.477 [2024-10-15 13:06:34.632543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.477 [2024-10-15 13:06:34.641520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:14.477 [2024-10-15 13:06:34.641540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.477 [2024-10-15 13:06:34.641548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.477 [2024-10-15 13:06:34.652326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:14.478 [2024-10-15 13:06:34.652347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.478 [2024-10-15 13:06:34.652355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.478 [2024-10-15 13:06:34.664431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0)
00:26:14.478 [2024-10-15 13:06:34.664452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.478 [2024-10-15 13:06:34.664461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-entry pattern — nvme_tcp.c:1470 data digest error on tqpair=(0xca2ab0), nvme_qpair.c:243 READ command NOTICE, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for roughly 90 further READ commands on qid:1 between 13:06:34.676 and 13:06:35.459; only the timestamps, cid, and lba values differ ...]
00:26:15.259 [2024-10-15 13:06:35.470153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0)
00:26:15.259 [2024-10-15 13:06:35.470173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.258 [2024-10-15 
13:06:35.470181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.259 [2024-10-15 13:06:35.483101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.259 [2024-10-15 13:06:35.483122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.259 [2024-10-15 13:06:35.483130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.259 [2024-10-15 13:06:35.491528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.259 [2024-10-15 13:06:35.491549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.259 [2024-10-15 13:06:35.491557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.259 [2024-10-15 13:06:35.503663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.259 [2024-10-15 13:06:35.503684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.259 [2024-10-15 13:06:35.503693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.259 [2024-10-15 13:06:35.515114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.259 [2024-10-15 13:06:35.515136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22124 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.259 [2024-10-15 13:06:35.515145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.259 [2024-10-15 13:06:35.523861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.259 [2024-10-15 13:06:35.523882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.259 [2024-10-15 13:06:35.523890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.259 [2024-10-15 13:06:35.535505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.259 [2024-10-15 13:06:35.535526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.259 [2024-10-15 13:06:35.535534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.259 [2024-10-15 13:06:35.545265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.259 [2024-10-15 13:06:35.545285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.259 [2024-10-15 13:06:35.545293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.259 [2024-10-15 13:06:35.554815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.259 [2024-10-15 13:06:35.554835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.259 [2024-10-15 13:06:35.554847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.259 24981.00 IOPS, 97.58 MiB/s [2024-10-15T11:06:35.578Z] [2024-10-15 13:06:35.564613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.259 [2024-10-15 13:06:35.564632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.259 [2024-10-15 13:06:35.564639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.259 [2024-10-15 13:06:35.573350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.259 [2024-10-15 13:06:35.573371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.259 [2024-10-15 13:06:35.573379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.582552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.582575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.582584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.591873] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.591894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.591903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.601149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.601169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.601177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.611547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.611568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.611576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.621424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.621445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.621453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.628897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.628918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.628926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.640569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.640594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.640607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.651327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.651347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.651355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.659849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.659870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.659878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.671640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.671661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.671669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.683973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.683994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.684005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.696252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.696273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.696281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.708365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.708385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.708393] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.719933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.719954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.719962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.728616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.728637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.728645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.739853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.739874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.739882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.748772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.748792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19787 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.748801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.761106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.761127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.761136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.769419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.769439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.769447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.781360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.781381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.781388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.793580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.793605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:99 nsid:1 lba:3716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.793613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.801807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.801827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.801835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.813339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.813361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.813369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.826119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.826140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.826152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.520 [2024-10-15 13:06:35.838435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.520 [2024-10-15 13:06:35.838456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.520 [2024-10-15 13:06:35.838465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.780 [2024-10-15 13:06:35.850900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.780 [2024-10-15 13:06:35.850923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.780 [2024-10-15 13:06:35.850931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.780 [2024-10-15 13:06:35.859087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.780 [2024-10-15 13:06:35.859107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.780 [2024-10-15 13:06:35.859116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.780 [2024-10-15 13:06:35.871650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.780 [2024-10-15 13:06:35.871671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.780 [2024-10-15 13:06:35.871680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.780 [2024-10-15 13:06:35.883758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 
00:26:15.780 [2024-10-15 13:06:35.883779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.780 [2024-10-15 13:06:35.883786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.780 [2024-10-15 13:06:35.891852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.780 [2024-10-15 13:06:35.891873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.780 [2024-10-15 13:06:35.891881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.780 [2024-10-15 13:06:35.901108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.780 [2024-10-15 13:06:35.901129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.780 [2024-10-15 13:06:35.901137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.780 [2024-10-15 13:06:35.910375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.780 [2024-10-15 13:06:35.910395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.780 [2024-10-15 13:06:35.910404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.780 [2024-10-15 13:06:35.919759] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.780 [2024-10-15 13:06:35.919784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.780 [2024-10-15 13:06:35.919793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.780 [2024-10-15 13:06:35.929212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.780 [2024-10-15 13:06:35.929232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.780 [2024-10-15 13:06:35.929240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.780 [2024-10-15 13:06:35.939721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.780 [2024-10-15 13:06:35.939742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.780 [2024-10-15 13:06:35.939750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.780 [2024-10-15 13:06:35.948782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:35.948803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:35.948811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:35.956937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:35.956958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:35.956966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:35.969281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:35.969302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:35.969310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:35.981706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:35.981726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:35.981734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:35.993464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:35.993484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:35.993492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:36.002586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:36.002612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:36.002620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:36.012341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:36.012364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:36.012372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:36.022705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:36.022727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:36.022736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:36.032622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:36.032643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:36.032651] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:36.041222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:36.041243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:36.041251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:36.050154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:36.050174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:36.050183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:36.059154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:36.059175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:36.059183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:36.069505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:36.069526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1302 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:36.069535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:36.077223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:36.077244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:36.077253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:36.087335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:36.087358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:36.087367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.781 [2024-10-15 13:06:36.095721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:15.781 [2024-10-15 13:06:36.095742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.781 [2024-10-15 13:06:36.095750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.106259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.106280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:99 nsid:1 lba:20465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.106288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.114723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.114744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.114752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.125050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.125072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.125080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.134636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.134657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.134665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.143043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.143064] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.143072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.154056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.154076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.154084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.162422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.162445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.162453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.174154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.174177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.174188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.185049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.185070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.185078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.194405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.194427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.194436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.202871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.202892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.202900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.212451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.212473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.212481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.221768] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.221790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.221798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.231389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.231413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.231423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.239741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.239762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.239770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.249906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.249927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.249941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.261353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.261374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.261383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.268987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.269007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.269016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.279881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.279902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.279910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.041 [2024-10-15 13:06:36.289011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.041 [2024-10-15 13:06:36.289032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.041 [2024-10-15 13:06:36.289041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.042 [2024-10-15 13:06:36.298724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.042 [2024-10-15 13:06:36.298745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.042 [2024-10-15 13:06:36.298753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.042 [2024-10-15 13:06:36.309333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.042 [2024-10-15 13:06:36.309354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.042 [2024-10-15 13:06:36.309363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.042 [2024-10-15 13:06:36.317064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.042 [2024-10-15 13:06:36.317085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.042 [2024-10-15 13:06:36.317094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.042 [2024-10-15 13:06:36.327328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.042 [2024-10-15 13:06:36.327350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.042 [2024-10-15 13:06:36.327358] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.042 [2024-10-15 13:06:36.336314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.042 [2024-10-15 13:06:36.336339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.042 [2024-10-15 13:06:36.336348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.042 [2024-10-15 13:06:36.345620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.042 [2024-10-15 13:06:36.345643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.042 [2024-10-15 13:06:36.345651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.042 [2024-10-15 13:06:36.356203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.042 [2024-10-15 13:06:36.356223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.042 [2024-10-15 13:06:36.356232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.301 [2024-10-15 13:06:36.365561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.301 [2024-10-15 13:06:36.365583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10765 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:16.301 [2024-10-15 13:06:36.365592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.301 [2024-10-15 13:06:36.374649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.301 [2024-10-15 13:06:36.374671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.301 [2024-10-15 13:06:36.374679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.301 [2024-10-15 13:06:36.386471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.301 [2024-10-15 13:06:36.386494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.301 [2024-10-15 13:06:36.386502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.301 [2024-10-15 13:06:36.394842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.301 [2024-10-15 13:06:36.394864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.301 [2024-10-15 13:06:36.394872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.301 [2024-10-15 13:06:36.404599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.404626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:11152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.404634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.414333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.414355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.414363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.422834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.422856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.422864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.432490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.432511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.432520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.442092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.442114] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.442122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.452109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.452131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.452139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.462185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.462206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.462215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.469846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.469867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.469875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.480131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.480152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.480160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.490614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.490635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.490643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.498808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.498829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.498840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.509324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.509345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.509353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.518880] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.518901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.518909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.527162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.527182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.527190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.537080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.537101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.537109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.302 [2024-10-15 13:06:36.547918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0) 00:26:16.302 [2024-10-15 13:06:36.547939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.302 [2024-10-15 13:06:36.547947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:26:16.302 [2024-10-15 13:06:36.555699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0)
00:26:16.302 [2024-10-15 13:06:36.555726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.302 [2024-10-15 13:06:36.555735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:16.302 [2024-10-15 13:06:36.565121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca2ab0)
00:26:16.302 [2024-10-15 13:06:36.565142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.302 [2024-10-15 13:06:36.565150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:16.302 25385.00 IOPS, 99.16 MiB/s
00:26:16.302 Latency(us)
00:26:16.302 [2024-10-15T11:06:36.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:16.302 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:16.302 nvme0n1 : 2.00 25384.65 99.16 0.00 0.00 5035.79 2621.44 18350.08
00:26:16.302 [2024-10-15T11:06:36.621Z] ===================================================================================================================
00:26:16.302 [2024-10-15T11:06:36.621Z] Total : 25384.65 99.16 0.00 0.00 5035.79 2621.44 18350.08
00:26:16.302 {
00:26:16.302 "results": [
00:26:16.302 {
00:26:16.302 "job": "nvme0n1",
00:26:16.302 "core_mask": "0x2",
00:26:16.302 "workload": "randread",
00:26:16.302 "status": "finished",
00:26:16.302 "queue_depth": 128,
00:26:16.302 "io_size": 4096,
00:26:16.302 "runtime": 2.0031,
00:26:16.302 "iops": 25384.653786630723,
00:26:16.302 "mibps": 99.15880385402626,
00:26:16.302 "io_failed": 0,
00:26:16.302 "io_timeout": 0,
00:26:16.302 "avg_latency_us": 5035.786901615272,
00:26:16.302 "min_latency_us": 2621.44,
00:26:16.302 "max_latency_us": 18350.08
00:26:16.302 }
00:26:16.302 ],
00:26:16.302 "core_count": 1
00:26:16.302 }
00:26:16.302 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:16.302 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:16.302 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:16.302 | .driver_specific
00:26:16.302 | .nvme_error
00:26:16.302 | .status_code
00:26:16.302 | .command_transient_transport_error'
00:26:16.302 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:16.561 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 199 > 0 ))
00:26:16.561 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1360512
00:26:16.561 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1360512 ']'
00:26:16.561 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1360512
00:26:16.561 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:16.561 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:16.561 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1360512
00:26:16.561 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- #
process_name=reactor_1
00:26:16.561 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:16.561 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1360512' killing process with pid 1360512 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1360512 Received shutdown signal, test time was about 2.000000 seconds
00:26:16.561
00:26:16.561 Latency(us)
00:26:16.561 [2024-10-15T11:06:36.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:16.561 [2024-10-15T11:06:36.880Z] ===================================================================================================================
00:26:16.561 [2024-10-15T11:06:36.880Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:16.562 13:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1360512
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1360987
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1360987 /var/tmp/bperf.sock
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1360987 ']'
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:16.820 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:16.820 [2024-10-15 13:06:37.054883] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization...
00:26:16.820 [2024-10-15 13:06:37.054940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360987 ]
00:26:16.820 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:16.820 Zero copy mechanism will not be used.
00:26:16.820 [2024-10-15 13:06:37.122640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.080 [2024-10-15 13:06:37.164718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.080 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:17.080 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:17.080 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:17.080 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:17.339 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:17.339 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.339 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.339 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.339 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.339 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.601 nvme0n1 00:26:17.601 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:17.601 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.601 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.601 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.601 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:17.601 13:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:17.601 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:17.601 Zero copy mechanism will not be used. 00:26:17.601 Running I/O for 2 seconds... 00:26:17.601 [2024-10-15 13:06:37.884440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.601 [2024-10-15 13:06:37.884474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.601 [2024-10-15 13:06:37.884484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.601 [2024-10-15 13:06:37.889877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.601 [2024-10-15 13:06:37.889903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.601 [2024-10-15 13:06:37.889912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.601 
[2024-10-15 13:06:37.895538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.601 [2024-10-15 13:06:37.895561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.601 [2024-10-15 13:06:37.895570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.601 [2024-10-15 13:06:37.900947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.601 [2024-10-15 13:06:37.900969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.601 [2024-10-15 13:06:37.900977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.601 [2024-10-15 13:06:37.906634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.601 [2024-10-15 13:06:37.906655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.601 [2024-10-15 13:06:37.906664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.601 [2024-10-15 13:06:37.911793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.601 [2024-10-15 13:06:37.911816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.601 [2024-10-15 13:06:37.911824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.601 [2024-10-15 13:06:37.919068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.601 [2024-10-15 13:06:37.919092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.601 [2024-10-15 13:06:37.919102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.862 [2024-10-15 13:06:37.924908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.862 [2024-10-15 13:06:37.924931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.862 [2024-10-15 13:06:37.924941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.862 [2024-10-15 13:06:37.930973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.862 [2024-10-15 13:06:37.930995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.862 [2024-10-15 13:06:37.931009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.862 [2024-10-15 13:06:37.936504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.862 [2024-10-15 13:06:37.936532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.862 [2024-10-15 13:06:37.936540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.862 [2024-10-15 13:06:37.941554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.862 [2024-10-15 13:06:37.941577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.862 [2024-10-15 13:06:37.941585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.862 [2024-10-15 13:06:37.947427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.862 [2024-10-15 13:06:37.947450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.862 [2024-10-15 13:06:37.947458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.862 [2024-10-15 13:06:37.952947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.862 [2024-10-15 13:06:37.952970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.862 [2024-10-15 13:06:37.952979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.862 [2024-10-15 13:06:37.958902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.862 [2024-10-15 13:06:37.958926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:17.862 [2024-10-15 13:06:37.958935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.862 [2024-10-15 13:06:37.965822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.862 [2024-10-15 13:06:37.965845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.862 [2024-10-15 13:06:37.965854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:37.971435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:37.971457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:37.971465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:37.976836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:37.976858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:37.976866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:37.982200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:37.982226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:37.982234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:37.987387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:37.987410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:37.987419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:37.992615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:37.992636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:37.992644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:37.997680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:37.997702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:37.997710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.002948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.002969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.002977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.007728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.007753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.007761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.012846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.012868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.012876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.017917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.017939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.017948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.023175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 
00:26:17.863 [2024-10-15 13:06:38.023197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.023205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.028530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.028553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.028560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.033870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.033891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.033900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.039128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.039149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.039157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.044515] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.044536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.044545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.050118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.050139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.050147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.055698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.055719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.055728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.061353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.061376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.061384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.066437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.066458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.066467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.071669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.071690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.071704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.077097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.077117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.077125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.082288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.082310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.082318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.087808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.087830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.087838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.093243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.093264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.093272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.098636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.098657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.098665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.104089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.104110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.104118] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.109436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.109457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.109465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.114659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.863 [2024-10-15 13:06:38.114680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.863 [2024-10-15 13:06:38.114689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.863 [2024-10-15 13:06:38.117517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.864 [2024-10-15 13:06:38.117542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.864 [2024-10-15 13:06:38.117550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.864 [2024-10-15 13:06:38.122809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.864 [2024-10-15 13:06:38.122830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:17.864 [2024-10-15 13:06:38.122839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.864 [2024-10-15 13:06:38.127633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.864 [2024-10-15 13:06:38.127655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.864 [2024-10-15 13:06:38.127663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.864 [2024-10-15 13:06:38.132799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.864 [2024-10-15 13:06:38.132820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.864 [2024-10-15 13:06:38.132829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.864 [2024-10-15 13:06:38.138044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.864 [2024-10-15 13:06:38.138065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.864 [2024-10-15 13:06:38.138073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.864 [2024-10-15 13:06:38.143096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.864 [2024-10-15 13:06:38.143117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.864 [2024-10-15 13:06:38.143125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.864 [2024-10-15 13:06:38.148376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.864 [2024-10-15 13:06:38.148398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.864 [2024-10-15 13:06:38.148406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.864 [2024-10-15 13:06:38.153920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.864 [2024-10-15 13:06:38.153941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.864 [2024-10-15 13:06:38.153950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.864 [2024-10-15 13:06:38.160035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.864 [2024-10-15 13:06:38.160057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.864 [2024-10-15 13:06:38.160065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.864 [2024-10-15 13:06:38.165074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:17.864 [2024-10-15 13:06:38.165096] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.864 [2024-10-15 13:06:38.165104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.864 [2024-10-15 13:06:38.170276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:17.864 [2024-10-15 13:06:38.170297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.864 [2024-10-15 13:06:38.170306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.864 [2024-10-15 13:06:38.175578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:17.864 [2024-10-15 13:06:38.175606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.864 [2024-10-15 13:06:38.175615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.864 [2024-10-15 13:06:38.180706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:17.864 [2024-10-15 13:06:38.180728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.864 [2024-10-15 13:06:38.180737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.186005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.186026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.186034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.191229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.191250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.191259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.196621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.196642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.196650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.202008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.202030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.202038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.207499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.207524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.207532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.212976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.212999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.213007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.218361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.218382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.218390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.223654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.223677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.223685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.229033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.229055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.229063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.234303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.234324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.234332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.239731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.239752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.239760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.245033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.245054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.245062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.250351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.250372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.250380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.255804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.255825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.255833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.261269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.261290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.261298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.266742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.266763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.266771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.272225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.272246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.272254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.277749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.125 [2024-10-15 13:06:38.277771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.125 [2024-10-15 13:06:38.277779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.125 [2024-10-15 13:06:38.283146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.283168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.283176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.288449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.288471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.288481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.293824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.293846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.293854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.299113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.299134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.299146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.304389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.304411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.304418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.309725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.309747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.309755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.315153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.315175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.315183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.320543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.320565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.320573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.325995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.326017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.326025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.331380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.331400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.331408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.336745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.336766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.336774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.342070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.342091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.342099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.347241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.347266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.347274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.352464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.352485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.352493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.357796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.357817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.357825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.363165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.363187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.363194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.368652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.368674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.368682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.374037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.374058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.374066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.379445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.379466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.379474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.384844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.384865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.384873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.390317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.390339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.390348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.395654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.395675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.395684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.401049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.401071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.401079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.406432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.406453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.406462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.411743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.411764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.411772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.417124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.417146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.417154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.422512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.422534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.126 [2024-10-15 13:06:38.422542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.126 [2024-10-15 13:06:38.427869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.126 [2024-10-15 13:06:38.427891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.127 [2024-10-15 13:06:38.427900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.127 [2024-10-15 13:06:38.433231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.127 [2024-10-15 13:06:38.433255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.127 [2024-10-15 13:06:38.433264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.127 [2024-10-15 13:06:38.438666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.127 [2024-10-15 13:06:38.438687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.127 [2024-10-15 13:06:38.438699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.127 [2024-10-15 13:06:38.444295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.127 [2024-10-15 13:06:38.444315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.127 [2024-10-15 13:06:38.444324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.387 [2024-10-15 13:06:38.449645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.387 [2024-10-15 13:06:38.449667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.387 [2024-10-15 13:06:38.449676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.387 [2024-10-15 13:06:38.454873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.387 [2024-10-15 13:06:38.454894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.387 [2024-10-15 13:06:38.454903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.387 [2024-10-15 13:06:38.460043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.387 [2024-10-15 13:06:38.460066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.387 [2024-10-15 13:06:38.460074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.387 [2024-10-15 13:06:38.465256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.387 [2024-10-15 13:06:38.465277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.387 [2024-10-15 13:06:38.465285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.387 [2024-10-15 13:06:38.470451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.387 [2024-10-15 13:06:38.470473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.387 [2024-10-15 13:06:38.470481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.387 [2024-10-15 13:06:38.475726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.387 [2024-10-15 13:06:38.475747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.387 [2024-10-15 13:06:38.475755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.387 [2024-10-15 13:06:38.481143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.387 [2024-10-15 13:06:38.481165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.387 [2024-10-15 13:06:38.481173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.387 [2024-10-15 13:06:38.486627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.387 [2024-10-15 13:06:38.486648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.387 [2024-10-15 13:06:38.486657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.387 [2024-10-15 13:06:38.492201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.387 [2024-10-15 13:06:38.492223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.387 [2024-10-15 13:06:38.492233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.387 [2024-10-15 13:06:38.497934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.387 [2024-10-15 13:06:38.497955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.497963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.503406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.503427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.503435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.508772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.508793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.508801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.514144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.514166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.514174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.519478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.519499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.519507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.524709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.524730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.524738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.529797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.529818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.529829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.535057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.535079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.535087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.540360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.540382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.540390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.545708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.545730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.545738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.551468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.551489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.551497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.556923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.556944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.556952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.562597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.562625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.562633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.568160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.568181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.568189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.573528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.573549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.573557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.578881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.578906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.578914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.584532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.584554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.584561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.591290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.591312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.591320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.598846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.598868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15 13:06:38.598876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.388 [2024-10-15 13:06:38.605435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:18.388 [2024-10-15 13:06:38.605457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.388 [2024-10-15
13:06:38.605465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.388 [2024-10-15 13:06:38.611879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.388 [2024-10-15 13:06:38.611901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-10-15 13:06:38.611910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.388 [2024-10-15 13:06:38.619027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.388 [2024-10-15 13:06:38.619050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-10-15 13:06:38.619058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.388 [2024-10-15 13:06:38.625462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.388 [2024-10-15 13:06:38.625485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-10-15 13:06:38.625493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.388 [2024-10-15 13:06:38.631824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.388 [2024-10-15 13:06:38.631846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-10-15 13:06:38.631855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.388 [2024-10-15 13:06:38.638226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.388 [2024-10-15 13:06:38.638247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-10-15 13:06:38.638256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.388 [2024-10-15 13:06:38.644266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.388 [2024-10-15 13:06:38.644288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-10-15 13:06:38.644296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.388 [2024-10-15 13:06:38.650148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.388 [2024-10-15 13:06:38.650169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-10-15 13:06:38.650177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.388 [2024-10-15 13:06:38.655424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.388 [2024-10-15 13:06:38.655445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-10-15 13:06:38.655453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.388 [2024-10-15 13:06:38.660780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.388 [2024-10-15 13:06:38.660801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-10-15 13:06:38.660809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.388 [2024-10-15 13:06:38.666152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.389 [2024-10-15 13:06:38.666173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-10-15 13:06:38.666181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.389 [2024-10-15 13:06:38.671556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.389 [2024-10-15 13:06:38.671578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-10-15 13:06:38.671586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.389 [2024-10-15 13:06:38.676758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 
00:26:18.389 [2024-10-15 13:06:38.676778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-10-15 13:06:38.676786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.389 [2024-10-15 13:06:38.681886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.389 [2024-10-15 13:06:38.681908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-10-15 13:06:38.681920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.389 [2024-10-15 13:06:38.687039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.389 [2024-10-15 13:06:38.687061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-10-15 13:06:38.687069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.389 [2024-10-15 13:06:38.692215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.389 [2024-10-15 13:06:38.692237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-10-15 13:06:38.692247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.389 [2024-10-15 13:06:38.697259] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.389 [2024-10-15 13:06:38.697281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-10-15 13:06:38.697289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.389 [2024-10-15 13:06:38.702419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.389 [2024-10-15 13:06:38.702441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-10-15 13:06:38.702449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.389 [2024-10-15 13:06:38.707717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.389 [2024-10-15 13:06:38.707740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-10-15 13:06:38.707748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.649 [2024-10-15 13:06:38.713069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.649 [2024-10-15 13:06:38.713092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.649 [2024-10-15 13:06:38.713101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:26:18.649 [2024-10-15 13:06:38.718481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.649 [2024-10-15 13:06:38.718504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.649 [2024-10-15 13:06:38.718512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.649 [2024-10-15 13:06:38.723870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.649 [2024-10-15 13:06:38.723891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.649 [2024-10-15 13:06:38.723900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.649 [2024-10-15 13:06:38.729195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.649 [2024-10-15 13:06:38.729221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.649 [2024-10-15 13:06:38.729228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.649 [2024-10-15 13:06:38.734458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.649 [2024-10-15 13:06:38.734480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.649 [2024-10-15 13:06:38.734488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.649 [2024-10-15 13:06:38.739768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.649 [2024-10-15 13:06:38.739789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.649 [2024-10-15 13:06:38.739799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.649 [2024-10-15 13:06:38.745046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.649 [2024-10-15 13:06:38.745066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.649 [2024-10-15 13:06:38.745076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.649 [2024-10-15 13:06:38.750115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.649 [2024-10-15 13:06:38.750137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.649 [2024-10-15 13:06:38.750146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.649 [2024-10-15 13:06:38.755055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.649 [2024-10-15 13:06:38.755076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.649 [2024-10-15 
13:06:38.755085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.649 [2024-10-15 13:06:38.760043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.649 [2024-10-15 13:06:38.760065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.649 [2024-10-15 13:06:38.760073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.649 [2024-10-15 13:06:38.764953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.649 [2024-10-15 13:06:38.764974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.649 [2024-10-15 13:06:38.764982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.769951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.769973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.769982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.775101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.775123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.775131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.780248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.780269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.780277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.785447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.785468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.785476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.790622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.790644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.790653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.795788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.795809] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.795817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.800958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.800979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.800988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.806118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.806139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.806148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.811271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.811292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.811300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.816448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 
13:06:38.816470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.816482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.821597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.821627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.821635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.826724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.826746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.826754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.831861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.831883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.831891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.836958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.836979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.836987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.841788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.841809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.841817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.846961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.846981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.846990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.852111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.852132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.852140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.857289] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.857311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.857318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.862443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.862464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.862472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.867595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.867621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.867629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.872754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.872776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.872784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:18.650 5718.00 IOPS, 714.75 MiB/s [2024-10-15T11:06:38.969Z] [2024-10-15 13:06:38.879074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.879095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.879103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.884245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.884267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.884275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.889462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.889484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.889491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.894657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.894678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.894686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.899828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.899849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.899858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.905043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.905065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.905078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.910665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.910687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.910695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.916434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.916456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:18.650 [2024-10-15 13:06:38.916464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.922956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.922979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.650 [2024-10-15 13:06:38.922988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.650 [2024-10-15 13:06:38.930697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.650 [2024-10-15 13:06:38.930720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.651 [2024-10-15 13:06:38.930729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.651 [2024-10-15 13:06:38.937585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.651 [2024-10-15 13:06:38.937612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.651 [2024-10-15 13:06:38.937620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.651 [2024-10-15 13:06:38.944306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.651 [2024-10-15 13:06:38.944328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.651 [2024-10-15 13:06:38.944337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.651 [2024-10-15 13:06:38.951644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.651 [2024-10-15 13:06:38.951666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.651 [2024-10-15 13:06:38.951676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.651 [2024-10-15 13:06:38.957746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.651 [2024-10-15 13:06:38.957768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.651 [2024-10-15 13:06:38.957777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.651 [2024-10-15 13:06:38.962972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.651 [2024-10-15 13:06:38.962998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.651 [2024-10-15 13:06:38.963006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.651 [2024-10-15 13:06:38.968272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.651 [2024-10-15 13:06:38.968293] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.651 [2024-10-15 13:06:38.968302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.910 [2024-10-15 13:06:38.973694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.910 [2024-10-15 13:06:38.973717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.910 [2024-10-15 13:06:38.973726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.910 [2024-10-15 13:06:38.978877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.910 [2024-10-15 13:06:38.978898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.910 [2024-10-15 13:06:38.978909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.910 [2024-10-15 13:06:38.984135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.910 [2024-10-15 13:06:38.984156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.910 [2024-10-15 13:06:38.984165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.910 [2024-10-15 13:06:38.989324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1468460) 00:26:18.910 [2024-10-15 13:06:38.989345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.910 [2024-10-15 13:06:38.989353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.910 [2024-10-15 13:06:38.994560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.910 [2024-10-15 13:06:38.994582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.910 [2024-10-15 13:06:38.994591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.910 [2024-10-15 13:06:38.999884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.910 [2024-10-15 13:06:38.999906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.910 [2024-10-15 13:06:38.999914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.910 [2024-10-15 13:06:39.005059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.910 [2024-10-15 13:06:39.005079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.005088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.010292] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.010314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.010322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.015616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.015639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.015648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.020846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.020868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.020877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.026107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.026130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.026138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.029666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.029688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.029696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.035136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.035159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.035169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.042886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.042911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.042919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.049642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.049665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.049673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.056904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.056929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.056942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.064075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.064098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.064106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.071786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.071810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.071820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.079407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.079430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.079439] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.086594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.086624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.086634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.092874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.092897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.092907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.097460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.097483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.097492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.102344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.102366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.102374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.107370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.107392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.107401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.112310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.112332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.112341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.117287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.117308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.117317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.122284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.122306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.122315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.127855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.127877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.127885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.133060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.133081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.133089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.138226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.138247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.138256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.143351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.143373] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.143382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.148489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.148510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.148518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.153673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.153695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.153708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.911 [2024-10-15 13:06:39.158848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.911 [2024-10-15 13:06:39.158869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.911 [2024-10-15 13:06:39.158878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.164014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.164037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.164046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.169210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.169231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.169239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.174444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.174465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.174474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.179651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.179676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.179684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.184744] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.184767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.184774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.189827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.189849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.189858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.194945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.194967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.194976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.200137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.200162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.200170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.205238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.205260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.205269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.210408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.210430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.210438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.215636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.215658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.215666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.220784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.220806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.220814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.225915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.225937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.225945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.912 [2024-10-15 13:06:39.231126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:18.912 [2024-10-15 13:06:39.231148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.912 [2024-10-15 13:06:39.231156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.173 [2024-10-15 13:06:39.236342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.173 [2024-10-15 13:06:39.236364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.173 [2024-10-15 13:06:39.236373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.173 [2024-10-15 13:06:39.241534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.173 [2024-10-15 13:06:39.241556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.173 [2024-10-15 13:06:39.241565] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.246706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.246729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.246737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.251855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.251877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.251885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.257066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.257089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.257098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.262204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.262226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.262234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.267324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.267345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.267354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.272508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.272529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.272538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.277745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.277765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.277774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.282829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.282851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.282859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.288042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.288064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.288078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.293195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.293217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.293226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.298359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.298382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.298390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.303568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.303590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.303599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.308706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.308728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.308735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.313905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.313929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.313939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.319165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.319187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.319196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.324370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.324393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.324402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.329611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.329633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.173 [2024-10-15 13:06:39.329641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.173 [2024-10-15 13:06:39.334892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.173 [2024-10-15 13:06:39.334919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.334929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.340315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.340337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.340346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.345689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.345711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.345721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.351075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.351098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.351108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.356435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.356457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.356468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.361774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.361796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.361805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.367160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.367182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.367191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.372475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.372498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.372508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.377846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.377869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.377877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.383517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.383541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.383550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.388555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.388577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.388586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.393495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.393517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.393526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.398530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.398552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.398560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.403536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.403559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.403568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.408788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.408810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.408819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.414010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.414033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.414042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.419244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.419265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.419274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.424459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.424486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.424494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.429635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.429656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.429664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.434774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.434795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.434804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.439862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.439884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.439892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.445027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.445050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.445059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.451273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.451295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.451304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.456674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.456696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.456704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.461858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.461880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.461888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.467033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.467054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.467063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.472217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.472238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.174 [2024-10-15 13:06:39.472246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.174 [2024-10-15 13:06:39.477406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.174 [2024-10-15 13:06:39.477427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.175 [2024-10-15 13:06:39.477435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.175 [2024-10-15 13:06:39.483038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.175 [2024-10-15 13:06:39.483060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.175 [2024-10-15 13:06:39.483068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.175 [2024-10-15 13:06:39.489736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.175 [2024-10-15 13:06:39.489758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.175 [2024-10-15 13:06:39.489767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.496934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.496958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.496966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.503067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.503090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.503099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.509292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.509314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.509322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.515503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.515526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.515536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.522017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.522040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.522053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.528080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.528103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.528113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.534116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.534138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.534147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.540310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.540332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.540341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.546669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.546692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.546702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.552784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.552805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.552815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.558515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.558537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.558545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.565431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.565453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.565462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.572946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.572967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.572977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.579935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.579961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.579972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.587857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.587878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.587887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.595795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.435 [2024-10-15 13:06:39.595818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.435 [2024-10-15 13:06:39.595827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.435 [2024-10-15 13:06:39.603217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.603239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.603248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.609836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.609858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.609866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.616533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.616555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.616564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.621951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.621972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.621981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.627261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.627282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.627291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.632858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.632880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.632888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.638163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.638186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.638194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.643418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.643439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.643447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.648575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.648597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.648610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.653859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.653881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.653890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.659098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.659119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.659127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.664309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.664331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.664339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.669442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.669464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.669472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.674673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.674694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.674703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.679887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.679908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.679919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.685066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.685088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.685096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.690231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.690253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.690260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:19.436 [2024-10-15 13:06:39.695381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460)
00:26:19.436 [2024-10-15 13:06:39.695402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.436 [2024-10-15 13:06:39.695410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0
sqhd:0061 p:0 m:0 dnr:0 00:26:19.436 [2024-10-15 13:06:39.700574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.436 [2024-10-15 13:06:39.700596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.436 [2024-10-15 13:06:39.700610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.436 [2024-10-15 13:06:39.705746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.436 [2024-10-15 13:06:39.705767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.436 [2024-10-15 13:06:39.705776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.436 [2024-10-15 13:06:39.710969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.436 [2024-10-15 13:06:39.710991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.436 [2024-10-15 13:06:39.710999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.436 [2024-10-15 13:06:39.716124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.436 [2024-10-15 13:06:39.716147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.436 [2024-10-15 13:06:39.716156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.436 [2024-10-15 13:06:39.721255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.436 [2024-10-15 13:06:39.721276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.436 [2024-10-15 13:06:39.721284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.436 [2024-10-15 13:06:39.726372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.436 [2024-10-15 13:06:39.726398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.436 [2024-10-15 13:06:39.726406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.436 [2024-10-15 13:06:39.731485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.436 [2024-10-15 13:06:39.731506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.436 [2024-10-15 13:06:39.731514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.436 [2024-10-15 13:06:39.736588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.436 [2024-10-15 13:06:39.736616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.436 [2024-10-15 
13:06:39.736624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.436 [2024-10-15 13:06:39.741758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.436 [2024-10-15 13:06:39.741778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.436 [2024-10-15 13:06:39.741786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.436 [2024-10-15 13:06:39.746859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.436 [2024-10-15 13:06:39.746880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.437 [2024-10-15 13:06:39.746888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.437 [2024-10-15 13:06:39.752071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.437 [2024-10-15 13:06:39.752092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.437 [2024-10-15 13:06:39.752101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.698 [2024-10-15 13:06:39.757270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.698 [2024-10-15 13:06:39.757293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.698 [2024-10-15 13:06:39.757302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.698 [2024-10-15 13:06:39.762539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.698 [2024-10-15 13:06:39.762560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.698 [2024-10-15 13:06:39.762569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.698 [2024-10-15 13:06:39.767734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.767756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.767768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.772898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.772920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.772928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.778049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.778071] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.778080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.783283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.783304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.783311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.788493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.788515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.788523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.793684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.793705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.793714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.798859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 
13:06:39.798881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.798890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.804004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.804025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.804033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.809130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.809152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.809161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.814306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.814332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.814340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.819456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.819478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.819486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.824510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.824531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.824539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.829645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.829666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.829674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.834834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.834855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.834863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.839966] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.839988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.839997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.845190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.845211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.845220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.850564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.850586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.850594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.856165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.856187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.856195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.861689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.861710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.861718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.867277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.867297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.867305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.699 [2024-10-15 13:06:39.872763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.872784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.872792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.699 5670.00 IOPS, 708.75 MiB/s [2024-10-15T11:06:40.018Z] [2024-10-15 13:06:39.879119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1468460) 00:26:19.699 [2024-10-15 13:06:39.879141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.699 [2024-10-15 13:06:39.879149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.699 00:26:19.699 Latency(us) 00:26:19.699 [2024-10-15T11:06:40.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.699 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:19.699 nvme0n1 : 2.00 5667.73 708.47 0.00 0.00 2819.70 631.95 10548.18 00:26:19.699 [2024-10-15T11:06:40.018Z] =================================================================================================================== 00:26:19.699 [2024-10-15T11:06:40.018Z] Total : 5667.73 708.47 0.00 0.00 2819.70 631.95 10548.18 00:26:19.699 { 00:26:19.699 "results": [ 00:26:19.699 { 00:26:19.699 "job": "nvme0n1", 00:26:19.699 "core_mask": "0x2", 00:26:19.699 "workload": "randread", 00:26:19.699 "status": "finished", 00:26:19.699 "queue_depth": 16, 00:26:19.699 "io_size": 131072, 00:26:19.699 "runtime": 2.003624, 00:26:19.699 "iops": 5667.730073107529, 00:26:19.699 "mibps": 708.4662591384412, 00:26:19.699 "io_failed": 0, 00:26:19.699 "io_timeout": 0, 00:26:19.699 "avg_latency_us": 2819.704633757695, 00:26:19.699 "min_latency_us": 631.9542857142857, 00:26:19.699 "max_latency_us": 10548.175238095238 00:26:19.699 } 00:26:19.699 ], 00:26:19.699 "core_count": 1 00:26:19.699 } 00:26:19.699 13:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:19.699 13:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:19.699 13:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:19.699 | .driver_specific 00:26:19.699 | .nvme_error 00:26:19.699 | .status_code 00:26:19.699 | .command_transient_transport_error' 00:26:19.699 13:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:19.958 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 366 > 0 )) 00:26:19.958 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1360987 00:26:19.958 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1360987 ']' 00:26:19.958 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1360987 00:26:19.958 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:19.958 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:19.958 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1360987 00:26:19.958 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:19.958 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:19.958 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1360987' 00:26:19.958 killing process with pid 1360987 00:26:19.958 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1360987 00:26:19.958 Received shutdown signal, test time was about 2.000000 seconds 00:26:19.958 00:26:19.958 Latency(us) 00:26:19.958 [2024-10-15T11:06:40.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.958 [2024-10-15T11:06:40.277Z] =================================================================================================================== 00:26:19.958 [2024-10-15T11:06:40.277Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:19.958 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1360987 00:26:20.217 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:20.217 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:20.218 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:20.218 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:20.218 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:20.218 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1361458 00:26:20.218 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1361458 /var/tmp/bperf.sock 00:26:20.218 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:20.218 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1361458 ']' 00:26:20.218 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.218 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:20.218 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:20.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:20.218 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:20.218 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.218 [2024-10-15 13:06:40.366288] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:26:20.218 [2024-10-15 13:06:40.366336] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1361458 ] 00:26:20.218 [2024-10-15 13:06:40.434894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.218 [2024-10-15 13:06:40.472150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.477 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:20.477 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:20.477 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.477 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.477 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:20.477 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.477 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.477 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:20.477 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:20.477 13:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:21.045 nvme0n1
00:26:21.045 13:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:21.045 13:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:21.045 13:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:21.045 13:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:21.045 13:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:21.045 13:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:21.045 Running I/O for 2 seconds...
00:26:21.045 [2024-10-15 13:06:41.324790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ee5c8
00:26:21.045 [2024-10-15 13:06:41.325551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.045 [2024-10-15 13:06:41.325582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:21.045 [2024-10-15 13:06:41.333402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e23b8
00:26:21.045 [2024-10-15 13:06:41.334050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.045 [2024-10-15 13:06:41.334075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:21.045 [2024-10-15 13:06:41.345220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f6458
00:26:21.045 [2024-10-15 13:06:41.346684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.045 [2024-10-15 13:06:41.346705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:21.045 [2024-10-15 13:06:41.351733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eff18
00:26:21.045 [2024-10-15 13:06:41.352382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.045 [2024-10-15 13:06:41.352402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:21.045 [2024-10-15 13:06:41.361331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e7818
00:26:21.045 [2024-10-15 13:06:41.361765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.045 [2024-10-15 13:06:41.361785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.373072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fcdd0
00:26:21.305 [2024-10-15 13:06:41.374608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.374628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.379848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fe720
00:26:21.305 [2024-10-15 13:06:41.380617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.380637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.391289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fdeb0
00:26:21.305 [2024-10-15 13:06:41.392665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.392683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.399467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ed4e8
00:26:21.305 [2024-10-15 13:06:41.400257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.400276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.409075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f8e88
00:26:21.305 [2024-10-15 13:06:41.410203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.410222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.418391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fdeb0
00:26:21.305 [2024-10-15 13:06:41.419058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.419078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.426877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7970
00:26:21.305 [2024-10-15 13:06:41.428117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.428137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.436323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f1430
00:26:21.305 [2024-10-15 13:06:41.437447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.437469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.445613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e4de8
00:26:21.305 [2024-10-15 13:06:41.446277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.446296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.455941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ee190
00:26:21.305 [2024-10-15 13:06:41.457413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.457432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.462244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e5658
00:26:21.305 [2024-10-15 13:06:41.462935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.462954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.471592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eb328
00:26:21.305 [2024-10-15 13:06:41.472025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.472045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.480953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f9f68
00:26:21.305 [2024-10-15 13:06:41.481490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.481509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.491287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ecc78
00:26:21.305 [2024-10-15 13:06:41.492650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.492668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.500743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f9b30
00:26:21.305 [2024-10-15 13:06:41.502202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.502221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.507369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f9f68
00:26:21.305 [2024-10-15 13:06:41.508113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.508132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.516816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eb328
00:26:21.305 [2024-10-15 13:06:41.517716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.517736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.526077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f3e60
00:26:21.305 [2024-10-15 13:06:41.526517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.526536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.536340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f96f8
00:26:21.305 [2024-10-15 13:06:41.537466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.537485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.544758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f8e88
00:26:21.305 [2024-10-15 13:06:41.545773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.545792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.553116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fa3a0
00:26:21.305 [2024-10-15 13:06:41.553999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.554018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.562375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7da8
00:26:21.305 [2024-10-15 13:06:41.563276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.305 [2024-10-15 13:06:41.563296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:21.305 [2024-10-15 13:06:41.570831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:21.305 [2024-10-15 13:06:41.571625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.306 [2024-10-15 13:06:41.571643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:21.306 [2024-10-15 13:06:41.581205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e27f0
00:26:21.306 [2024-10-15 13:06:41.582482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.306 [2024-10-15 13:06:41.582501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:21.306 [2024-10-15 13:06:41.590885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e5220
00:26:21.306 [2024-10-15 13:06:41.592242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.306 [2024-10-15 13:06:41.592262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:21.306 [2024-10-15 13:06:41.600381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eea00
00:26:21.306 [2024-10-15 13:06:41.601903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.306 [2024-10-15 13:06:41.601921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:21.306 [2024-10-15 13:06:41.606815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ebb98
00:26:21.306 [2024-10-15 13:06:41.607463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.306 [2024-10-15 13:06:41.607481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:21.306 [2024-10-15 13:06:41.617233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ecc78
00:26:21.306 [2024-10-15 13:06:41.618336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.306 [2024-10-15 13:06:41.618355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:21.565 [2024-10-15 13:06:41.626365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e84c0
00:26:21.565 [2024-10-15 13:06:41.627071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.565 [2024-10-15 13:06:41.627091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.634927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e2c28
00:26:21.566 [2024-10-15 13:06:41.636198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.636217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.642648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166dece0
00:26:21.566 [2024-10-15 13:06:41.643283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.643302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.652008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7da8
00:26:21.566 [2024-10-15 13:06:41.652780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.652799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.661365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f5be8
00:26:21.566 [2024-10-15 13:06:41.662248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.662267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.670781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f2d80
00:26:21.566 [2024-10-15 13:06:41.671788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.671810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.679848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eb328
00:26:21.566 [2024-10-15 13:06:41.680871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.680889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.688394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e4140
00:26:21.566 [2024-10-15 13:06:41.689287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.689306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.697097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f4298
00:26:21.566 [2024-10-15 13:06:41.697530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.697549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.706447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f9b30
00:26:21.566 [2024-10-15 13:06:41.706998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.707017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.715914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fa3a0
00:26:21.566 [2024-10-15 13:06:41.716578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.716598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.725037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f1868
00:26:21.566 [2024-10-15 13:06:41.725941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.725960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.734406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166efae0
00:26:21.566 [2024-10-15 13:06:41.735543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.735562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.741955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f35f0
00:26:21.566 [2024-10-15 13:06:41.742504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.742523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.751170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f5378
00:26:21.566 [2024-10-15 13:06:41.751949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.751968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.759575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f46d0
00:26:21.566 [2024-10-15 13:06:41.760208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.760228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.769778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f20d8
00:26:21.566 [2024-10-15 13:06:41.770675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.770693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.780297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e5220
00:26:21.566 [2024-10-15 13:06:41.781743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.781761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.786648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f4b08
00:26:21.566 [2024-10-15 13:06:41.787323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.787341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.795768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e1b48
00:26:21.566 [2024-10-15 13:06:41.796320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.796339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.805006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ec840
00:26:21.566 [2024-10-15 13:06:41.805779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.805798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.813407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e5ec8
00:26:21.566 [2024-10-15 13:06:41.814031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.814049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.823747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e88f8
00:26:21.566 [2024-10-15 13:06:41.824535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.824554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.832257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e5ec8
00:26:21.566 [2024-10-15 13:06:41.832889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.832908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.841041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eea00
00:26:21.566 [2024-10-15 13:06:41.841608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.841628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.851391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ea680
00:26:21.566 [2024-10-15 13:06:41.852508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.852527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.860243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f96f8
00:26:21.566 [2024-10-15 13:06:41.861341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.861360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:21.566 [2024-10-15 13:06:41.868543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f4b08
00:26:21.566 [2024-10-15 13:06:41.869214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.566 [2024-10-15 13:06:41.869233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:21.567 [2024-10-15 13:06:41.877537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f20d8
00:26:21.567 [2024-10-15 13:06:41.878206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.567 [2024-10-15 13:06:41.878225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:21.567 [2024-10-15 13:06:41.885875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f96f8
00:26:21.567 [2024-10-15 13:06:41.886629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.567 [2024-10-15 13:06:41.886648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:21.826 [2024-10-15 13:06:41.896865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e6b70
00:26:21.826 [2024-10-15 13:06:41.897904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.826 [2024-10-15 13:06:41.897923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:21.826 [2024-10-15 13:06:41.904001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166dece0
00:26:21.826 [2024-10-15 13:06:41.904550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.826 [2024-10-15 13:06:41.904572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:21.826 [2024-10-15 13:06:41.913697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f1430
00:26:21.826 [2024-10-15 13:06:41.914491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.826 [2024-10-15 13:06:41.914509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:21.826 [2024-10-15 13:06:41.924690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ec408
00:26:21.826 [2024-10-15 13:06:41.925977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.826 [2024-10-15 13:06:41.925996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:21.826 [2024-10-15 13:06:41.931164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f2510
00:26:21.826 [2024-10-15 13:06:41.931751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.826 [2024-10-15 13:06:41.931769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:21.826 [2024-10-15 13:06:41.940836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f20d8
00:26:21.826 [2024-10-15 13:06:41.941552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.826 [2024-10-15 13:06:41.941572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:21.826 [2024-10-15 13:06:41.951346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e5220
00:26:21.826 [2024-10-15 13:06:41.952294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.827 [2024-10-15 13:06:41.952312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:21.827 [2024-10-15 13:06:41.960663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f4f40
00:26:21.827 [2024-10-15 13:06:41.961578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.827 [2024-10-15 13:06:41.961597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:21.827 [2024-10-15 13:06:41.968980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fa7d8
00:26:21.827 [2024-10-15 13:06:41.969911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.827 [2024-10-15 13:06:41.969930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:21.827 [2024-10-15 13:06:41.978338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f8a50
00:26:21.827 [2024-10-15 13:06:41.979401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.827 [2024-10-15 13:06:41.979421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:21.827 [2024-10-15 13:06:41.987228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166df118
00:26:21.827 [2024-10-15 13:06:41.988052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.827 [2024-10-15 13:06:41.988071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:21.827 [2024-10-15 13:06:41.996257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eff18
00:26:21.827 [2024-10-15 13:06:41.996945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.827 [2024-10-15 13:06:41.996965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:21.827 [2024-10-15 13:06:42.005476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f8618
00:26:21.827 [2024-10-15 13:06:42.006409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.827 [2024-10-15 13:06:42.006428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:21.827 [2024-10-15 13:06:42.013776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f8a50
00:26:21.827 [2024-10-15 13:06:42.014822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.827 [2024-10-15 13:06:42.014841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:21.827 [2024-10-15 13:06:42.022881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f57b0
00:26:21.827 [2024-10-15 13:06:42.023832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.827 [2024-10-15 13:06:42.023851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:21.827 [2024-10-15 13:06:42.033915] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f0ff8 00:26:21.827 [2024-10-15 13:06:42.035347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.035365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:21.827 [2024-10-15 13:06:42.040315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e6738 00:26:21.827 [2024-10-15 13:06:42.041054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.041072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:21.827 [2024-10-15 13:06:42.050765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fe720 00:26:21.827 [2024-10-15 13:06:42.051745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.051763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:21.827 [2024-10-15 13:06:42.060171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fe2e8 00:26:21.827 [2024-10-15 13:06:42.061371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.061390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 
dnr:0 00:26:21.827 [2024-10-15 13:06:42.069535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f6458 00:26:21.827 [2024-10-15 13:06:42.070856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.070875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:21.827 [2024-10-15 13:06:42.079155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e1710 00:26:21.827 [2024-10-15 13:06:42.080587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.080610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:21.827 [2024-10-15 13:06:42.086929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f46d0 00:26:21.827 [2024-10-15 13:06:42.087926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.087945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:21.827 [2024-10-15 13:06:42.096456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e9e10 00:26:21.827 [2024-10-15 13:06:42.097720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.097739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:21.827 [2024-10-15 13:06:42.105016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e8088 00:26:21.827 [2024-10-15 13:06:42.105970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.105990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:21.827 [2024-10-15 13:06:42.114129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fda78 00:26:21.827 [2024-10-15 13:06:42.115164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.115183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:21.827 [2024-10-15 13:06:42.125214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e5a90 00:26:21.827 [2024-10-15 13:06:42.126688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.126706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.827 [2024-10-15 13:06:42.131567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eee38 00:26:21.827 [2024-10-15 13:06:42.132214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.132233] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:21.827 [2024-10-15 13:06:42.140614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e3060 00:26:21.827 [2024-10-15 13:06:42.141254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.827 [2024-10-15 13:06:42.141276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:22.087 [2024-10-15 13:06:42.149621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ee5c8 00:26:22.087 [2024-10-15 13:06:42.150268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.087 [2024-10-15 13:06:42.150287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:22.087 [2024-10-15 13:06:42.159141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e8088 00:26:22.087 [2024-10-15 13:06:42.160032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.087 [2024-10-15 13:06:42.160051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:22.087 [2024-10-15 13:06:42.167979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f0ff8 00:26:22.087 [2024-10-15 13:06:42.168632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.087 [2024-10-15 13:06:42.168652] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:22.087 [2024-10-15 13:06:42.179020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eee38 00:26:22.087 [2024-10-15 13:06:42.180372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.087 [2024-10-15 13:06:42.180391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:22.087 [2024-10-15 13:06:42.188410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e8088 00:26:22.087 [2024-10-15 13:06:42.189905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.087 [2024-10-15 13:06:42.189923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.087 [2024-10-15 13:06:42.194833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f0ff8 00:26:22.087 [2024-10-15 13:06:42.195615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.087 [2024-10-15 13:06:42.195634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.087 [2024-10-15 13:06:42.205909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fc128 00:26:22.087 [2024-10-15 13:06:42.207211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21292 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:22.087 [2024-10-15 13:06:42.207229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.212358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eb328 00:26:22.088 [2024-10-15 13:06:42.212928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.212947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.223296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f9b30 00:26:22.088 [2024-10-15 13:06:42.224349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.224371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.231723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f9f68 00:26:22.088 [2024-10-15 13:06:42.232510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.232529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.240671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ec840 00:26:22.088 [2024-10-15 13:06:42.241503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 
nsid:1 lba:23366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.241521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.249744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e3498 00:26:22.088 [2024-10-15 13:06:42.250579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.250597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.258827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f9b30 00:26:22.088 [2024-10-15 13:06:42.259542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.259561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.269744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f92c0 00:26:22.088 [2024-10-15 13:06:42.271173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.271192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.276106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eaef0 00:26:22.088 [2024-10-15 13:06:42.276754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.276773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.287689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e6fa8 00:26:22.088 [2024-10-15 13:06:42.289122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.289140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.294079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ff3c8 00:26:22.088 [2024-10-15 13:06:42.294810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.294829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.303211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.088 [2024-10-15 13:06:42.303945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.303963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:22.088 27996.00 IOPS, 109.36 MiB/s [2024-10-15T11:06:42.407Z] [2024-10-15 13:06:42.315673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24195c0) with pdu=0x2000166ecc78 00:26:22.088 [2024-10-15 13:06:42.317017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.317035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.322104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f5be8 00:26:22.088 [2024-10-15 13:06:42.322728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.322747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.332657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fb480 00:26:22.088 [2024-10-15 13:06:42.333513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.333532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.341511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fdeb0 00:26:22.088 [2024-10-15 13:06:42.342508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.342527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.350670] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fdeb0 00:26:22.088 [2024-10-15 13:06:42.351549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.351567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.360149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166de470 00:26:22.088 [2024-10-15 13:06:42.361192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.361210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.369653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f6cc8 00:26:22.088 [2024-10-15 13:06:42.370814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.370833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.377310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e12d8 00:26:22.088 [2024-10-15 13:06:42.377809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.377832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:26:22.088 [2024-10-15 13:06:42.386690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e6300 00:26:22.088 [2024-10-15 13:06:42.387301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.387322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.395718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e01f8 00:26:22.088 [2024-10-15 13:06:42.396659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.396678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:22.088 [2024-10-15 13:06:42.404267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fcdd0 00:26:22.088 [2024-10-15 13:06:42.405233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.088 [2024-10-15 13:06:42.405251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:22.348 [2024-10-15 13:06:42.413947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eee38 00:26:22.348 [2024-10-15 13:06:42.415025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.348 [2024-10-15 13:06:42.415044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:22.348 [2024-10-15 13:06:42.423132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fb8b8 00:26:22.348 [2024-10-15 13:06:42.424174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.348 [2024-10-15 13:06:42.424192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.348 [2024-10-15 13:06:42.432365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e3498 00:26:22.348 [2024-10-15 13:06:42.433408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.348 [2024-10-15 13:06:42.433427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.348 [2024-10-15 13:06:42.440700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166de470 00:26:22.348 [2024-10-15 13:06:42.441613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.348 [2024-10-15 13:06:42.441631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:22.348 [2024-10-15 13:06:42.449962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e3d08 00:26:22.348 [2024-10-15 13:06:42.450882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.348 [2024-10-15 13:06:42.450901] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.348 [2024-10-15 13:06:42.458990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e9e10 00:26:22.348 [2024-10-15 13:06:42.459944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.348 [2024-10-15 13:06:42.459962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.348 [2024-10-15 13:06:42.467931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7da8 00:26:22.348 [2024-10-15 13:06:42.468866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.348 [2024-10-15 13:06:42.468885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.348 [2024-10-15 13:06:42.476943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ef270 00:26:22.348 [2024-10-15 13:06:42.477902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.348 [2024-10-15 13:06:42.477920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.348 [2024-10-15 13:06:42.485887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e4578 00:26:22.348 [2024-10-15 13:06:42.486827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.348 [2024-10-15 
13:06:42.486846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.348 [2024-10-15 13:06:42.494840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f2948 00:26:22.348 [2024-10-15 13:06:42.495781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.348 [2024-10-15 13:06:42.495800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.348 [2024-10-15 13:06:42.503873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f4298 00:26:22.348 [2024-10-15 13:06:42.504810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.348 [2024-10-15 13:06:42.504828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.348 [2024-10-15 13:06:42.512858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f0350 00:26:22.348 [2024-10-15 13:06:42.513801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.348 [2024-10-15 13:06:42.513819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.521857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e8088 00:26:22.349 [2024-10-15 13:06:42.522820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8482 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.522840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.530993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e5658 00:26:22.349 [2024-10-15 13:06:42.531949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.531968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.539935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166dfdc0 00:26:22.349 [2024-10-15 13:06:42.540869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.540888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.548893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fcdd0 00:26:22.349 [2024-10-15 13:06:42.549846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.549865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.558107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166efae0 00:26:22.349 [2024-10-15 13:06:42.558855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:13 nsid:1 lba:23167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.558874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.568454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7970 00:26:22.349 [2024-10-15 13:06:42.569966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.569985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.574821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166eaab8 00:26:22.349 [2024-10-15 13:06:42.575528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.575547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.583715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f3e60 00:26:22.349 [2024-10-15 13:06:42.584491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.584509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.593104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166ed0b0 00:26:22.349 [2024-10-15 13:06:42.594022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.594041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.602769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fb480 00:26:22.349 [2024-10-15 13:06:42.603808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.603826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.611260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166e0a68 00:26:22.349 [2024-10-15 13:06:42.611982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.612003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.621383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166feb58 00:26:22.349 [2024-10-15 13:06:42.622520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.622539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.629726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f3e60 
00:26:22.349 [2024-10-15 13:06:42.630514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.630532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.638521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f1ca0 00:26:22.349 [2024-10-15 13:06:42.639360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.639378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.647522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fc560 00:26:22.349 [2024-10-15 13:06:42.648365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.648383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.656484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fe2e8 00:26:22.349 [2024-10-15 13:06:42.657321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.657339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:22.349 [2024-10-15 13:06:42.665784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x24195c0) with pdu=0x2000166f2948 00:26:22.349 [2024-10-15 13:06:42.666390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.349 [2024-10-15 13:06:42.666409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.676755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f20d8 00:26:22.609 [2024-10-15 13:06:42.678262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.678281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.683093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166fa3a0 00:26:22.609 [2024-10-15 13:06:42.683781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.683800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.693049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.609 [2024-10-15 13:06:42.693183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.693200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.702449] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.609 [2024-10-15 13:06:42.702578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.702595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.711811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.609 [2024-10-15 13:06:42.711940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.711958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.721232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.609 [2024-10-15 13:06:42.721361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.721379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.730599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.609 [2024-10-15 13:06:42.730733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.730751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:26:22.609 [2024-10-15 13:06:42.739947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.609 [2024-10-15 13:06:42.740078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.740095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.749340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.609 [2024-10-15 13:06:42.749470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.749487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.758751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.609 [2024-10-15 13:06:42.758879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.758897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.768096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.609 [2024-10-15 13:06:42.768225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.768242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.777593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.609 [2024-10-15 13:06:42.777730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.777748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.786949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.609 [2024-10-15 13:06:42.787078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.609 [2024-10-15 13:06:42.787095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.609 [2024-10-15 13:06:42.796318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.609 [2024-10-15 13:06:42.796448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.796467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.805727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.805870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.805886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.815265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.815410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.815428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.824805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.824938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.824955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.834307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.834437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.834455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.843735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.843867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.843885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.853395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.853527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.853546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.862911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.863041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.863058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.872412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.872541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.872558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.881839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.881967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 
[2024-10-15 13:06:42.881984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.891173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.891302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.891318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.900568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.900707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.900725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.909997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.910126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.910144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.919390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.919520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21818 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.919537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.610 [2024-10-15 13:06:42.928843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.610 [2024-10-15 13:06:42.928971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.610 [2024-10-15 13:06:42.928988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:42.938373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 [2024-10-15 13:06:42.938504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.870 [2024-10-15 13:06:42.938525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:42.947836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 [2024-10-15 13:06:42.947964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.870 [2024-10-15 13:06:42.947982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:42.957212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 [2024-10-15 13:06:42.957341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:13351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.870 [2024-10-15 13:06:42.957357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:42.966635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 [2024-10-15 13:06:42.966766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.870 [2024-10-15 13:06:42.966784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:42.976043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 [2024-10-15 13:06:42.976174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.870 [2024-10-15 13:06:42.976191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:42.985478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 [2024-10-15 13:06:42.985609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.870 [2024-10-15 13:06:42.985627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:42.994995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 [2024-10-15 13:06:42.995129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.870 [2024-10-15 13:06:42.995146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:43.004443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 [2024-10-15 13:06:43.004570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.870 [2024-10-15 13:06:43.004586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:43.013799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 [2024-10-15 13:06:43.013928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.870 [2024-10-15 13:06:43.013945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:43.023205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 [2024-10-15 13:06:43.023337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.870 [2024-10-15 13:06:43.023355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:43.032582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 
[2024-10-15 13:06:43.032720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.870 [2024-10-15 13:06:43.032737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:43.042105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 [2024-10-15 13:06:43.042245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.870 [2024-10-15 13:06:43.042262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.870 [2024-10-15 13:06:43.051500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.870 [2024-10-15 13:06:43.051636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.871 [2024-10-15 13:06:43.051654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.871 [2024-10-15 13:06:43.060906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.871 [2024-10-15 13:06:43.061036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.871 [2024-10-15 13:06:43.061053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.871 [2024-10-15 13:06:43.070237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.871 [2024-10-15 13:06:43.070364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.871 [2024-10-15 13:06:43.070382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.871 [2024-10-15 13:06:43.079849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.871 [2024-10-15 13:06:43.079981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.871 [2024-10-15 13:06:43.079999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.871 [2024-10-15 13:06:43.089185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.871 [2024-10-15 13:06:43.089317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.871 [2024-10-15 13:06:43.089334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.871 [2024-10-15 13:06:43.098537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.871 [2024-10-15 13:06:43.098677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.871 [2024-10-15 13:06:43.098710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.871 [2024-10-15 13:06:43.108160] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.871 [2024-10-15 13:06:43.108288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.871 [2024-10-15 13:06:43.108323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.871 [2024-10-15 13:06:43.117753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.871 [2024-10-15 13:06:43.117885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.871 [2024-10-15 13:06:43.117903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.871 [2024-10-15 13:06:43.127224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.871 [2024-10-15 13:06:43.127353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.871 [2024-10-15 13:06:43.127370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.871 [2024-10-15 13:06:43.136583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538 00:26:22.871 [2024-10-15 13:06:43.136720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.871 [2024-10-15 13:06:43.136738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:26:22.871 [2024-10-15 13:06:43.145911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:22.871 [2024-10-15 13:06:43.146040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.871 [2024-10-15 13:06:43.146057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:22.871 [2024-10-15 13:06:43.155290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:22.871 [2024-10-15 13:06:43.155417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.871 [2024-10-15 13:06:43.155434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:22.871 [2024-10-15 13:06:43.164680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:22.871 [2024-10-15 13:06:43.164810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.871 [2024-10-15 13:06:43.164828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:22.871 [2024-10-15 13:06:43.174042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:22.871 [2024-10-15 13:06:43.174170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.871 [2024-10-15 13:06:43.174188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:22.871 [2024-10-15 13:06:43.183447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:22.871 [2024-10-15 13:06:43.183574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.871 [2024-10-15 13:06:43.183594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.130 [2024-10-15 13:06:43.192950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.130 [2024-10-15 13:06:43.193083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.130 [2024-10-15 13:06:43.193101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.130 [2024-10-15 13:06:43.202423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.202551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.202568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 [2024-10-15 13:06:43.211821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.211950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.211966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 [2024-10-15 13:06:43.221217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.221346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.221363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 [2024-10-15 13:06:43.230582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.230719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.230737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 [2024-10-15 13:06:43.239959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.240088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.240105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 [2024-10-15 13:06:43.249279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.249409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.249426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 [2024-10-15 13:06:43.258677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.258826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.258854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 [2024-10-15 13:06:43.268037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.268173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.268190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 [2024-10-15 13:06:43.277425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.277553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.277570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 [2024-10-15 13:06:43.286828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.286957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.286974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 [2024-10-15 13:06:43.296167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.296296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.296313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 [2024-10-15 13:06:43.305566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.305701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.305718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 [2024-10-15 13:06:43.314961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24195c0) with pdu=0x2000166f7538
00:26:23.131 [2024-10-15 13:06:43.315773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.131 [2024-10-15 13:06:43.315793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:23.131 27785.00 IOPS, 108.54 MiB/s
00:26:23.131 Latency(us)
00:26:23.131 [2024-10-15T11:06:43.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:23.131 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:23.131 nvme0n1 : 2.00 27781.39 108.52 0.00 0.00 4599.67 2231.34 12483.05
00:26:23.131 [2024-10-15T11:06:43.450Z] ===================================================================================================================
00:26:23.131 [2024-10-15T11:06:43.450Z] Total : 27781.39 108.52 0.00 0.00 4599.67 2231.34 12483.05
00:26:23.131 {
00:26:23.131 "results": [
00:26:23.131 {
00:26:23.131 "job": "nvme0n1",
00:26:23.131 "core_mask": "0x2",
00:26:23.131 "workload": "randwrite",
00:26:23.131 "status": "finished",
00:26:23.131 "queue_depth": 128,
00:26:23.131 "io_size": 4096,
00:26:23.131 "runtime": 2.004579,
00:26:23.131 "iops": 27781.394497298435,
00:26:23.131 "mibps": 108.52107225507201,
00:26:23.131 "io_failed": 0,
00:26:23.131 "io_timeout": 0,
00:26:23.131 "avg_latency_us": 4599.6739638645895,
00:26:23.131 "min_latency_us": 2231.344761904762,
00:26:23.131 "max_latency_us": 12483.047619047618
00:26:23.131 }
00:26:23.131 ],
00:26:23.131 "core_count": 1
00:26:23.131 }
00:26:23.131 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:23.131 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:23.131 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:23.131 | .driver_specific
00:26:23.131 | .nvme_error
00:26:23.131 | .status_code
00:26:23.131 | .command_transient_transport_error'
00:26:23.131 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:23.390 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 ))
00:26:23.390 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1361458
00:26:23.390 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1361458 ']'
00:26:23.390 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1361458
00:26:23.390 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:23.390 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:23.390 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1361458
00:26:23.390 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:23.390 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:23.390 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1361458'
killing process with pid 1361458
13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1361458
Received shutdown signal, test time was about 2.000000 seconds
00:26:23.390
00:26:23.390 Latency(us)
00:26:23.390 [2024-10-15T11:06:43.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:23.390 [2024-10-15T11:06:43.709Z] ===================================================================================================================
00:26:23.390 [2024-10-15T11:06:43.709Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:23.390 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1361458
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1362149
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1362149 /var/tmp/bperf.sock
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1362149 ']'
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:23.649 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:23.649 [2024-10-15 13:06:43.793835] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization...
00:26:23.649 [2024-10-15 13:06:43.793882] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1362149 ]
00:26:23.649 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:23.649 Zero copy mechanism will not be used.
00:26:23.649 [2024-10-15 13:06:43.862457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:23.649 [2024-10-15 13:06:43.904265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:23.908 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:23.908 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:26:23.908 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:23.908 13:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:23.908 13:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:23.908 13:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:23.908 13:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:23.908 13:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:23.908 13:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
13:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:24.476 nvme0n1
00:26:24.476 13:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:24.476 13:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:24.476 13:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:24.476 13:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:24.476 13:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:24.476 13:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:24.476 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:24.476 Zero copy mechanism will not be used.
00:26:24.476 Running I/O for 2 seconds...
00:26:24.476 [2024-10-15 13:06:44.720431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.476 [2024-10-15 13:06:44.720681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.476 [2024-10-15 13:06:44.720709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.725558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.725808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.725833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.730470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.730718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.730740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.735167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.735404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.735426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.740322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.740565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.740586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.745066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.745301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.745322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.749671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.749917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.749937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.754121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.754364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.754385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.758615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.758852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.758873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.763163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.763415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.763436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.767811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.768067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.768093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.772392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.772644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.772665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.776852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.777088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.777110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.781252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.781485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.781505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.786000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.786239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.786262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.790523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.790765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.790786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.477 [2024-10-15 13:06:44.795377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.477 [2024-10-15 13:06:44.795620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.477 [2024-10-15 13:06:44.795641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.774 [2024-10-15 13:06:44.800339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.774 [2024-10-15 13:06:44.800590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.774 [2024-10-15 13:06:44.800619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.774 [2024-10-15 13:06:44.805926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.774 [2024-10-15 13:06:44.806178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.774 [2024-10-15 13:06:44.806200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:24.774 [2024-10-15 13:06:44.811071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.774 [2024-10-15 13:06:44.811309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.774 [2024-10-15 13:06:44.811331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.774 [2024-10-15 13:06:44.816139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.774 [2024-10-15 13:06:44.816386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.774 [2024-10-15 13:06:44.816407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.774 [2024-10-15 13:06:44.821503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.774 [2024-10-15 13:06:44.821742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.774 [2024-10-15 13:06:44.821765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.774 [2024-10-15 13:06:44.826539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.774 [2024-10-15 13:06:44.826781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.826802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.831315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.831548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.831569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.836240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.836476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.836497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.841942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.842174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.842195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.847101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.847343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.847365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.852138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.852201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.852222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.857954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.858188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.858210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.864112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.864349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.864371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.870563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.870805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.870827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.878171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.878405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.878427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.884779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.885012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.885033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.890767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.891011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.891032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.897415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.897662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.897683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.902865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.903101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.903122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.908142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.908379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.908400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.912920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.913162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.913183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.775 [2024-10-15 13:06:44.917540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:24.775 [2024-10-15 13:06:44.917783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.775 [2024-10-15 13:06:44.917815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.775 [2024-10-15 13:06:44.922278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.775 [2024-10-15 13:06:44.922510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.775 [2024-10-15 13:06:44.922531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.775 [2024-10-15 13:06:44.926940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.775 [2024-10-15 13:06:44.927178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.775 [2024-10-15 13:06:44.927198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.775 [2024-10-15 13:06:44.931731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.775 [2024-10-15 13:06:44.931967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.775 [2024-10-15 13:06:44.931989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.775 [2024-10-15 13:06:44.936236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.775 [2024-10-15 13:06:44.936468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.775 [2024-10-15 13:06:44.936489] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.775 [2024-10-15 13:06:44.940909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.775 [2024-10-15 13:06:44.941156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.775 [2024-10-15 13:06:44.941177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.775 [2024-10-15 13:06:44.945660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.775 [2024-10-15 13:06:44.945896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.775 [2024-10-15 13:06:44.945917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.775 [2024-10-15 13:06:44.950156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.775 [2024-10-15 13:06:44.950403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.775 [2024-10-15 13:06:44.950425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.775 [2024-10-15 13:06:44.954508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.775 [2024-10-15 13:06:44.954760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:24.775 [2024-10-15 13:06:44.954781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.775 [2024-10-15 13:06:44.958896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.775 [2024-10-15 13:06:44.959142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.775 [2024-10-15 13:06:44.959164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.775 [2024-10-15 13:06:44.963374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.775 [2024-10-15 13:06:44.963614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.775 [2024-10-15 13:06:44.963635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.775 [2024-10-15 13:06:44.967702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.775 [2024-10-15 13:06:44.967936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.775 [2024-10-15 13:06:44.967958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.775 [2024-10-15 13:06:44.972060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.775 [2024-10-15 13:06:44.972294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.775 [2024-10-15 13:06:44.972316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:44.976445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:44.976700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:44.976722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:44.980802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:44.981043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:44.981065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:44.985187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:44.985441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:44.985467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:44.989567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:44.989813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:44.989834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:44.993969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:44.994221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:44.994241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:44.998351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:44.998589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:44.998618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:45.002723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:45.002962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:45.002984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:45.007214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 
00:26:24.776 [2024-10-15 13:06:45.007467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:45.007487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:45.011690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:45.011927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:45.011948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:45.016299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:45.016532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:45.016553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:45.022242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:45.022476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:45.022498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:45.029003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:45.029243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:45.029264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:45.035967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:45.036191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:45.036212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:45.043271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:45.043422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:45.043441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:45.051070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:45.051232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:45.051252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 
13:06:45.058891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:45.059134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:45.059155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:45.066448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:45.066704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:45.066727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.776 [2024-10-15 13:06:45.072118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:24.776 [2024-10-15 13:06:45.072355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.776 [2024-10-15 13:06:45.072378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.081 [2024-10-15 13:06:45.077732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.081 [2024-10-15 13:06:45.077974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.081 [2024-10-15 13:06:45.077997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.081 [2024-10-15 13:06:45.082748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.081 [2024-10-15 13:06:45.082989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.081 [2024-10-15 13:06:45.083010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.081 [2024-10-15 13:06:45.087838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.081 [2024-10-15 13:06:45.088091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.081 [2024-10-15 13:06:45.088112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.081 [2024-10-15 13:06:45.092745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.081 [2024-10-15 13:06:45.092995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.081 [2024-10-15 13:06:45.093017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.081 [2024-10-15 13:06:45.097903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.081 [2024-10-15 13:06:45.097957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.081 [2024-10-15 13:06:45.097975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.081 [2024-10-15 13:06:45.103541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.081 [2024-10-15 13:06:45.103807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.081 [2024-10-15 13:06:45.103830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.081 [2024-10-15 13:06:45.109027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.081 [2024-10-15 13:06:45.109286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.081 [2024-10-15 13:06:45.109309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.113907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.114175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.114197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.119078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.119325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.119347] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.124435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.124702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.124724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.129876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.130136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.130163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.135643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.135911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.135933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.141635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.141915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.141937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.146782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.146898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.146918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.152377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.152633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.152654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.158165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.158412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.158433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.163858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.164093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.164116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.170505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.170785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.170808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.176355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.176414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.176433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.183752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.184003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.184025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.190050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.190117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.190135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.196104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.196327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.196348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.200818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.201038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.201059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.205884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.206135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.206156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.211917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 
00:26:25.082 [2024-10-15 13:06:45.212221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.212241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.217227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.217437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.217456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.222192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.222410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.222429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.227160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.227399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.227419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.232135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.232369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.232390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.237419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.237641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.237662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.242440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.242662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.242683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.082 [2024-10-15 13:06:45.246724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.082 [2024-10-15 13:06:45.246941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.082 [2024-10-15 13:06:45.246962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 
13:06:45.251033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.251257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.251278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.255317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.255539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.255560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.259973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.260190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.260211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.265868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.266154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.266175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.272457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.272748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.272773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.278995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.279267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.279288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.285065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.285360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.285381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.291291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.291577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.291598] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.297415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.297702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.297723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.303973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.304274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.304295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.310206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.310507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.310527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.316574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.316846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 
13:06:45.316867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.322709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.322958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.322979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.328889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.329194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.329215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.334944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.335195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.335216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.341155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.341418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.341440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.347050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.347267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.347287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.351721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.351940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.351961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.356714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.356939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.356961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.361539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.361783] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.361803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.366348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.366570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.366591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.370946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.371164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.371189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.375614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.375833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.375854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.381483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 
13:06:45.381770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.381791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.387003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.387223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.387243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.391945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.392164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.392185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.083 [2024-10-15 13:06:45.396769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.083 [2024-10-15 13:06:45.396986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-10-15 13:06:45.397006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.401558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.401790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.401811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.406365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.406583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.406610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.411164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.411382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.411401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.415821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.416040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.416061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.420611] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.420829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.420851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.425687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.425906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.425926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.431216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.431435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.431455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.438345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.438675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.438696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.445014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.445246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.445267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.451323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.451586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.451612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.457389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.457656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.457678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.464133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.464460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.464481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.471140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.471428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.471448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.478548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.478850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.478881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.485732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.486012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.486032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.493240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.493531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.493551] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.500853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.501206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.501228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.507883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.508215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.508236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.514803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.515100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.515121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.521877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.344 [2024-10-15 13:06:45.522176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:25.344 [2024-10-15 13:06:45.522197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.344 [2024-10-15 13:06:45.528686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.528963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.528988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.535897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.536182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.536203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.543905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.544186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.544206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.551078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.551308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.551328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.557573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.557821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.557842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.564008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.564294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.564314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.570872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.571107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.571127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.576734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.577015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.577036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.583474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.583769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.583790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.589203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.589412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.589433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.594186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.594393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.594413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.599689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 
00:26:25.345 [2024-10-15 13:06:45.599895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.599915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.605570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.605789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.605810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.610924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.611130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.611149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.615774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.615982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.616002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.620492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.620703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.620724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.625766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.625972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.625993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.630422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.630634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.630654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.635442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.635656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.635676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 
13:06:45.640417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.640630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.640650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.645021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.645228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.645248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.649509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.649737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.649758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.654077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.654282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.654302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.658512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.658720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.658739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.345 [2024-10-15 13:06:45.663851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.345 [2024-10-15 13:06:45.664061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.345 [2024-10-15 13:06:45.664082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.606 [2024-10-15 13:06:45.668535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.606 [2024-10-15 13:06:45.668750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.606 [2024-10-15 13:06:45.668771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.606 [2024-10-15 13:06:45.673629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.606 [2024-10-15 13:06:45.673837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.606 [2024-10-15 13:06:45.673861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.606 [2024-10-15 13:06:45.678067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.606 [2024-10-15 13:06:45.678274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.606 [2024-10-15 13:06:45.678294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.606 [2024-10-15 13:06:45.682493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.606 [2024-10-15 13:06:45.682705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.606 [2024-10-15 13:06:45.682725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.606 [2024-10-15 13:06:45.686666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.606 [2024-10-15 13:06:45.686872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.606 [2024-10-15 13:06:45.686892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.606 [2024-10-15 13:06:45.690960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.606 [2024-10-15 13:06:45.691164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.606 [2024-10-15 13:06:45.691185] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.606 [2024-10-15 13:06:45.695247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.606 [2024-10-15 13:06:45.695452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.606 [2024-10-15 13:06:45.695472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.606 [2024-10-15 13:06:45.699651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.606 [2024-10-15 13:06:45.699857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.606 [2024-10-15 13:06:45.699878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.606 [2024-10-15 13:06:45.703897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.606 [2024-10-15 13:06:45.704102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.606 [2024-10-15 13:06:45.704123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.606 [2024-10-15 13:06:45.708019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.606 [2024-10-15 13:06:45.708223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.708243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.712286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.712493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.712513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.607 5725.00 IOPS, 715.62 MiB/s [2024-10-15T11:06:45.926Z] [2024-10-15 13:06:45.717691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.717896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.717916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.722560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.722771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.722790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.727027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.727232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.727253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.731428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.731656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.731677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.735903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.736113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.736142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.740274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.740482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.740501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.744695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 
00:26:25.607 [2024-10-15 13:06:45.744909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.744930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.749143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.749350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.749375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.753518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.753735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.753756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.758029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.758233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.758254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.762447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.762658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.762676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.766597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.766812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.766831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.770753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.770958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.770979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.774917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.775122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.775143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 
13:06:45.779082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.779289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.779309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.783225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.783431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.783452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.787326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.787538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.787559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.791443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.791655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.791676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.795523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.795737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.795757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.799627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.799834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.799856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.803728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.803936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.803954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.807837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.808045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.808065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.811944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.812149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.812169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.816019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.816226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.816246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.820150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.820356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.820377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.824234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.824441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.824462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.607 [2024-10-15 13:06:45.828311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.607 [2024-10-15 13:06:45.828520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.607 [2024-10-15 13:06:45.828540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.608 [2024-10-15 13:06:45.832441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.608 [2024-10-15 13:06:45.832654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.608 [2024-10-15 13:06:45.832673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.608 [2024-10-15 13:06:45.836535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.608 [2024-10-15 13:06:45.836745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.608 [2024-10-15 13:06:45.836765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.608 [2024-10-15 13:06:45.840613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.608 [2024-10-15 13:06:45.840837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:25.608 [2024-10-15 13:06:45.840857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.608 [2024-10-15 13:06:45.844751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.608 [2024-10-15 13:06:45.844959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.608 [2024-10-15 13:06:45.844979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.608 [2024-10-15 13:06:45.848852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.608 [2024-10-15 13:06:45.849056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.608 [2024-10-15 13:06:45.849084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.608 [2024-10-15 13:06:45.852945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.608 [2024-10-15 13:06:45.853150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.608 [2024-10-15 13:06:45.853170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.608 [2024-10-15 13:06:45.857104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.608 [2024-10-15 13:06:45.857309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.608 [2024-10-15 13:06:45.857335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.608 [2024-10-15 13:06:45.861869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.608 [2024-10-15 13:06:45.862168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.608 [2024-10-15 13:06:45.862188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.608 [2024-10-15 13:06:45.867671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.608 [2024-10-15 13:06:45.867966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.608 [2024-10-15 13:06:45.867986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.608 [2024-10-15 13:06:45.873797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.608 [2024-10-15 13:06:45.874111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.608 [2024-10-15 13:06:45.874131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.608 [2024-10-15 13:06:45.879862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.608 [2024-10-15 13:06:45.880160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.608 [2024-10-15 13:06:45.880180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.608 [2024-10-15 13:06:45.886141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.608 [2024-10-15 13:06:45.886427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.608 [2024-10-15 13:06:45.886448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.608 [2024-10-15 13:06:45.892207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.608 [2024-10-15 13:06:45.892493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.608 [2024-10-15 13:06:45.892513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.608 [2024-10-15 13:06:45.898151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.608 [2024-10-15 13:06:45.898438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.608 [2024-10-15 13:06:45.898459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.608 [2024-10-15 13:06:45.904011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.608 [2024-10-15 13:06:45.904264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.608 [2024-10-15 13:06:45.904284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.608 [2024-10-15 13:06:45.910040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.608 [2024-10-15 13:06:45.910328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.608 [2024-10-15 13:06:45.910349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.608 [2024-10-15 13:06:45.916093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.608 [2024-10-15 13:06:45.916394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.608 [2024-10-15 13:06:45.916415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.608 [2024-10-15 13:06:45.922507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.608 [2024-10-15 13:06:45.922789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.608 [2024-10-15 13:06:45.922810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.928563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.928863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.928885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.935136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.935428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.935448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.941529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.941780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.941801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.947719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.947999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.948019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.953583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.953798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.953818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.958718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.958924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.958944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.962935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.963138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.963158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.967147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.967353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.967374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.971280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.971483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.971510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.975370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.975583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.975609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.979505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.979716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.979736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.983671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.983892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.983911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.987846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.988056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.988077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.992003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.992216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.992236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:45.996185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:45.996397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:45.996422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:46.000354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:46.000567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:46.000588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:46.004489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:46.004709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:46.004730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:46.008640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:46.008861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:46.008881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:46.012787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:46.013004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:46.013024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:46.016925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:46.017138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:46.017158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:46.021077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:46.021284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:46.021305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:46.025141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:46.025346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:46.025367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:46.029188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:46.029395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:46.029415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:46.033271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:46.033478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.955 [2024-10-15 13:06:46.033499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.955 [2024-10-15 13:06:46.037341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.955 [2024-10-15 13:06:46.037550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.037571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.041427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.041638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.041658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.045504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.045715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.045735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.049687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.049894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.049915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.054082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.054290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.054310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.058989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.059287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.059307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.063926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.064135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.064155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.068447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.068660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.068684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.072817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.073025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.073046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.077456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.077669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.077688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.081853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.082061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.082082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.086158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.086370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.086390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.090645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.090853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.090873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.095199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.095407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.095427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.100347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.100556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.100578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.105328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.105534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.105555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.110386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.110598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.110624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.115164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.115371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.115392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.120041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.120253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.120274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.125187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.125394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.125415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.129897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.130101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.130122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.134986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.135200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.135220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.139747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.139951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.139971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.144608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.144814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.144834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.149435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.149651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.149672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.154686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.154892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.154912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.159162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.159368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.159389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.163670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.163878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.163897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.167811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.168024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.956 [2024-10-15 13:06:46.168045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.956 [2024-10-15 13:06:46.172134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.956 [2024-10-15 13:06:46.172351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.172372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.176643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.176855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.176877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.181316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.181537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.181555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.185791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.186004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.186025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.190243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.190456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.190480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.194784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.194990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.195011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.199523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.199756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.199777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.203768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.203984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.204004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.208005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.208220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.208241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.212218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.212426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.212447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.216486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.216706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.216727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.220728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.220937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.220959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.224853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.225064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.225085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.229069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.229280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.229303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.233366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.233579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.233605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.238335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.238550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.238571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.243390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.243631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.243653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.248064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.248273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.248295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.252635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.252848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.252869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.257238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.257451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.257471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15 13:06:46.261537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90
00:26:25.957 [2024-10-15 13:06:46.261765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.957 [2024-10-15 13:06:46.261786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.957 [2024-10-15
13:06:46.265857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.957 [2024-10-15 13:06:46.266074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.957 [2024-10-15 13:06:46.266094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.957 [2024-10-15 13:06:46.270361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:25.957 [2024-10-15 13:06:46.270569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.957 [2024-10-15 13:06:46.270588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.275331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.275541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.275562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.280074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.280287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.280307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.284799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.285006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.285026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.289210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.289418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.289439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.294317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.294525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.294546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.298989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.299188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.299208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.303200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.303409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.303429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.307415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.307629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.307654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.311652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.311864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.311885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.315852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.316059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.316079] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.320095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.320305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.320326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.324644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.324854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.324875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.328919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.329129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.329149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.333087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.333297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.333317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.337214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.337421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.337442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.341459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.341673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.341694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.218 [2024-10-15 13:06:46.345636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.218 [2024-10-15 13:06:46.345849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.218 [2024-10-15 13:06:46.345869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.349829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.350039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.350059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.353962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.354169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.354188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.358115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.358324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.358344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.362256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.362469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.362489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.366475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.366702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.366722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.371480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.371782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.371803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.377449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.377714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.377735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.382412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.382628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.382648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.387145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 
00:26:26.219 [2024-10-15 13:06:46.387356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.387377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.391997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.392205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.392226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.396642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.396852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.396873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.402299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.402627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.402647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.408254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.408567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.408587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.414532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.414796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.414817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.420511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.420791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.420812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.426431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.426749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.426771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 
13:06:46.432920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.433194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.433219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.438655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.438916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.438936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.444225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.444487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.444508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.449820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.450068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.450089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.455356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.455565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.455585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.460693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.460989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.461010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.466023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.466327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.466349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.471244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.471530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.471550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.475668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.475891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.475911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.479609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.479830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.479849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.483712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.483935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.483956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.487866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.488088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.219 [2024-10-15 13:06:46.488108] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.219 [2024-10-15 13:06:46.491973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.219 [2024-10-15 13:06:46.492168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.220 [2024-10-15 13:06:46.492189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.220 [2024-10-15 13:06:46.496025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.220 [2024-10-15 13:06:46.496247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.220 [2024-10-15 13:06:46.496268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.220 [2024-10-15 13:06:46.500143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.220 [2024-10-15 13:06:46.500334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.220 [2024-10-15 13:06:46.500356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.220 [2024-10-15 13:06:46.504216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.220 [2024-10-15 13:06:46.504423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:26.220 [2024-10-15 13:06:46.504443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.220 [2024-10-15 13:06:46.508226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.220 [2024-10-15 13:06:46.508433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.220 [2024-10-15 13:06:46.508454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.220 [2024-10-15 13:06:46.512188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.220 [2024-10-15 13:06:46.512364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.220 [2024-10-15 13:06:46.512387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.220 [2024-10-15 13:06:46.516148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.220 [2024-10-15 13:06:46.516354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.220 [2024-10-15 13:06:46.516375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.220 [2024-10-15 13:06:46.521474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.220 [2024-10-15 13:06:46.521756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.220 [2024-10-15 13:06:46.521779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.220 [2024-10-15 13:06:46.526376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.220 [2024-10-15 13:06:46.526569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.220 [2024-10-15 13:06:46.526588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.220 [2024-10-15 13:06:46.530441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.220 [2024-10-15 13:06:46.530630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.220 [2024-10-15 13:06:46.530651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.220 [2024-10-15 13:06:46.534574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.220 [2024-10-15 13:06:46.534759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.220 [2024-10-15 13:06:46.534779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.220 [2024-10-15 13:06:46.538670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.220 [2024-10-15 13:06:46.538845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.220 [2024-10-15 13:06:46.538866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.480 [2024-10-15 13:06:46.542701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.480 [2024-10-15 13:06:46.542880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.480 [2024-10-15 13:06:46.542901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.480 [2024-10-15 13:06:46.546653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.480 [2024-10-15 13:06:46.546833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.480 [2024-10-15 13:06:46.546852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.480 [2024-10-15 13:06:46.550617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.550795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.550815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.554579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 
00:26:26.481 [2024-10-15 13:06:46.554769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.554788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.558934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.559160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.559181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.563383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.563559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.563578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.567475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.567688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.567709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.572516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.572791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.572813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.577303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.577482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.577502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.581440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.581634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.581655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.585545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.585731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.585751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 
13:06:46.589498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.589728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.589749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.593558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.593740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.593761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.597498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.597690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.597711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.601525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.601750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.601770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.605506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.605699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.605720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.609502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.609730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.609751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.613494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.613687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.613708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.617582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.617770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.617791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.621815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.622044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.622069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.625947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.626161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.626182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.630032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.630230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.630251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.634258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.634476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.634497] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.638369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.638544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.638563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.642490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.642673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.642691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.647515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.647791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.647811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.653099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.653374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.653395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.659239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.659433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.659453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.663669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.663847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.663866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.481 [2024-10-15 13:06:46.667748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.481 [2024-10-15 13:06:46.667951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.481 [2024-10-15 13:06:46.667971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.482 [2024-10-15 13:06:46.671982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.482 [2024-10-15 13:06:46.672146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.482 [2024-10-15 13:06:46.672165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.482 [2024-10-15 13:06:46.675937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.482 [2024-10-15 13:06:46.676113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.482 [2024-10-15 13:06:46.676133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.482 [2024-10-15 13:06:46.680076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.482 [2024-10-15 13:06:46.680268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.482 [2024-10-15 13:06:46.680288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.482 [2024-10-15 13:06:46.685182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.482 [2024-10-15 13:06:46.685373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.482 [2024-10-15 13:06:46.685392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.482 [2024-10-15 13:06:46.689977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.482 [2024-10-15 13:06:46.690159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.482 [2024-10-15 13:06:46.690179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.482 [2024-10-15 13:06:46.694058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.482 [2024-10-15 13:06:46.694284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.482 [2024-10-15 13:06:46.694305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.482 [2024-10-15 13:06:46.698080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.482 [2024-10-15 13:06:46.698311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.482 [2024-10-15 13:06:46.698331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.482 [2024-10-15 13:06:46.702097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.482 [2024-10-15 13:06:46.702262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.482 [2024-10-15 13:06:46.702282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.482 [2024-10-15 13:06:46.706150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 
00:26:26.482 [2024-10-15 13:06:46.706340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.482 [2024-10-15 13:06:46.706360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.482 [2024-10-15 13:06:46.710173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.482 [2024-10-15 13:06:46.710343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.482 [2024-10-15 13:06:46.710363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.482 [2024-10-15 13:06:46.714125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.482 [2024-10-15 13:06:46.714345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.482 [2024-10-15 13:06:46.714365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.482 6258.50 IOPS, 782.31 MiB/s [2024-10-15T11:06:46.801Z] [2024-10-15 13:06:46.719744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2419900) with pdu=0x2000166fef90 00:26:26.482 [2024-10-15 13:06:46.719888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.482 [2024-10-15 13:06:46.719906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.482 00:26:26.482 Latency(us) 
00:26:26.482 [2024-10-15T11:06:46.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.482 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:26.482 nvme0n1 : 2.00 6255.55 781.94 0.00 0.00 2553.45 1825.65 7957.94 00:26:26.482 [2024-10-15T11:06:46.801Z] =================================================================================================================== 00:26:26.482 [2024-10-15T11:06:46.801Z] Total : 6255.55 781.94 0.00 0.00 2553.45 1825.65 7957.94 00:26:26.482 { 00:26:26.482 "results": [ 00:26:26.482 { 00:26:26.482 "job": "nvme0n1", 00:26:26.482 "core_mask": "0x2", 00:26:26.482 "workload": "randwrite", 00:26:26.482 "status": "finished", 00:26:26.482 "queue_depth": 16, 00:26:26.482 "io_size": 131072, 00:26:26.482 "runtime": 2.0035, 00:26:26.482 "iops": 6255.552782630397, 00:26:26.482 "mibps": 781.9440978287996, 00:26:26.482 "io_failed": 0, 00:26:26.482 "io_timeout": 0, 00:26:26.482 "avg_latency_us": 2553.4529954824025, 00:26:26.482 "min_latency_us": 1825.6457142857143, 00:26:26.482 "max_latency_us": 7957.942857142857 00:26:26.482 } 00:26:26.482 ], 00:26:26.482 "core_count": 1 00:26:26.482 } 00:26:26.482 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:26.482 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:26.482 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:26.482 | .driver_specific 00:26:26.482 | .nvme_error 00:26:26.482 | .status_code 00:26:26.482 | .command_transient_transport_error' 00:26:26.482 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:26.741 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # (( 404 > 0 )) 00:26:26.741 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1362149 00:26:26.741 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1362149 ']' 00:26:26.741 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1362149 00:26:26.741 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:26.741 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.741 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1362149 00:26:26.741 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:26.741 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:26.741 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1362149' 00:26:26.741 killing process with pid 1362149 00:26:26.741 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1362149 00:26:26.741 Received shutdown signal, test time was about 2.000000 seconds 00:26:26.741 00:26:26.741 Latency(us) 00:26:26.741 [2024-10-15T11:06:47.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.741 [2024-10-15T11:06:47.060Z] =================================================================================================================== 00:26:26.741 [2024-10-15T11:06:47.060Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:26.741 13:06:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1362149 00:26:27.000 13:06:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1360297 00:26:27.000 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1360297 ']' 00:26:27.000 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1360297 00:26:27.000 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:27.000 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:27.000 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1360297 00:26:27.000 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:27.000 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:27.000 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1360297' 00:26:27.000 killing process with pid 1360297 00:26:27.000 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1360297 00:26:27.000 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1360297 00:26:27.260 00:26:27.260 real 0m14.025s 00:26:27.260 user 0m26.769s 00:26:27.260 sys 0m4.641s 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.260 ************************************ 00:26:27.260 END TEST nvmf_digest_error 00:26:27.260 ************************************ 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # 
trap - SIGINT SIGTERM EXIT 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:27.260 rmmod nvme_tcp 00:26:27.260 rmmod nvme_fabrics 00:26:27.260 rmmod nvme_keyring 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 1360297 ']' 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 1360297 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1360297 ']' 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1360297 00:26:27.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1360297) - No such process 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1360297 is not found' 00:26:27.260 Process with pid 1360297 is not found 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 
00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.260 13:06:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:29.794 00:26:29.794 real 0m36.192s 00:26:29.794 user 0m54.880s 00:26:29.794 sys 0m13.690s 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:29.794 ************************************ 00:26:29.794 END TEST nvmf_digest 00:26:29.794 ************************************ 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.794 ************************************ 00:26:29.794 START TEST nvmf_bdevperf 00:26:29.794 ************************************ 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:29.794 * Looking for test storage... 00:26:29.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.794 13:06:49 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.794 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.795 13:06:49 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:29.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.795 --rc genhtml_branch_coverage=1 00:26:29.795 --rc genhtml_function_coverage=1 00:26:29.795 --rc genhtml_legend=1 00:26:29.795 --rc geninfo_all_blocks=1 00:26:29.795 --rc geninfo_unexecuted_blocks=1 00:26:29.795 00:26:29.795 ' 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:29.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.795 --rc genhtml_branch_coverage=1 00:26:29.795 --rc genhtml_function_coverage=1 00:26:29.795 --rc genhtml_legend=1 00:26:29.795 --rc geninfo_all_blocks=1 00:26:29.795 --rc geninfo_unexecuted_blocks=1 00:26:29.795 00:26:29.795 ' 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:29.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.795 --rc genhtml_branch_coverage=1 00:26:29.795 --rc genhtml_function_coverage=1 00:26:29.795 --rc genhtml_legend=1 00:26:29.795 --rc geninfo_all_blocks=1 00:26:29.795 --rc geninfo_unexecuted_blocks=1 00:26:29.795 00:26:29.795 ' 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:29.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.795 --rc genhtml_branch_coverage=1 00:26:29.795 --rc genhtml_function_coverage=1 00:26:29.795 --rc genhtml_legend=1 00:26:29.795 --rc geninfo_all_blocks=1 00:26:29.795 --rc geninfo_unexecuted_blocks=1 00:26:29.795 00:26:29.795 ' 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.795 13:06:49 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:29.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:26:29.795 13:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:36.366 Found 
0000:86:00.0 (0x8086 - 0x159b) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:36.366 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp 
== tcp ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:36.366 Found net devices under 0000:86:00.0: cvl_0_0 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:36.366 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:36.367 Found net devices under 0000:86:00.1: cvl_0_1 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@440 -- # is_hw=yes 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:26:36.367 00:26:36.367 --- 10.0.0.2 ping statistics --- 00:26:36.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.367 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:36.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:26:36.367 00:26:36.367 --- 10.0.0.1 ping statistics --- 00:26:36.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.367 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1366159 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1366159 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
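The `nvmf_tcp_init` trace above (namespace creation, moving the target NIC, addressing, the tagged iptables accept rule, and the two-way ping check) can be sketched as the commands below. This is an illustrative, root-requiring reconstruction of what the harness does, using the interface names (`cvl_0_0`, `cvl_0_1`) and addresses shown in the log; it is not a verbatim excerpt of `nvmf/common.sh`.

```shell
# Sketch of the target-namespace setup traced in the log (requires root).
# cvl_0_0 becomes the target-side NIC inside the namespace; cvl_0_1 stays
# in the root namespace as the initiator-side NIC.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic in on the initiator interface; the comment tag is what
# lets teardown later strip SPDK rules with iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check connectivity in both directions, as the harness does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Because everything here mutates host network state, it only runs as root on a machine with those interfaces present; it is a configuration sketch, not a portable script.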
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1366159 ']' 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.367 [2024-10-15 13:06:55.783286] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:26:36.367 [2024-10-15 13:06:55.783341] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.367 [2024-10-15 13:06:55.857161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:36.367 [2024-10-15 13:06:55.901861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.367 [2024-10-15 13:06:55.901897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:36.367 [2024-10-15 13:06:55.901904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.367 [2024-10-15 13:06:55.901910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.367 [2024-10-15 13:06:55.901915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:36.367 [2024-10-15 13:06:55.903337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.367 [2024-10-15 13:06:55.903442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.367 [2024-10-15 13:06:55.903443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.367 13:06:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.367 [2024-10-15 13:06:56.039713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.367 13:06:56 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.367 Malloc0 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.367 [2024-10-15 13:06:56.110564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.367 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.367 { 00:26:36.367 "params": { 00:26:36.368 "name": "Nvme$subsystem", 00:26:36.368 "trtype": "$TEST_TRANSPORT", 00:26:36.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.368 "adrfam": "ipv4", 00:26:36.368 "trsvcid": "$NVMF_PORT", 00:26:36.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.368 "hdgst": ${hdgst:-false}, 00:26:36.368 "ddgst": ${ddgst:-false} 00:26:36.368 }, 00:26:36.368 "method": "bdev_nvme_attach_controller" 00:26:36.368 } 00:26:36.368 EOF 00:26:36.368 )") 00:26:36.368 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:26:36.368 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:26:36.368 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:26:36.368 13:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:36.368 "params": { 00:26:36.368 "name": "Nvme1", 00:26:36.368 "trtype": "tcp", 00:26:36.368 "traddr": "10.0.0.2", 00:26:36.368 "adrfam": "ipv4", 00:26:36.368 "trsvcid": "4420", 00:26:36.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.368 "hdgst": false, 00:26:36.368 "ddgst": false 00:26:36.368 }, 00:26:36.368 "method": "bdev_nvme_attach_controller" 00:26:36.368 }' 00:26:36.368 [2024-10-15 13:06:56.161623] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:26:36.368 [2024-10-15 13:06:56.161662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366191 ] 00:26:36.368 [2024-10-15 13:06:56.232081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.368 [2024-10-15 13:06:56.275658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.368 Running I/O for 1 seconds... 
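The `gen_nvmf_target_json` helper above expands its here-doc into the single `bdev_nvme_attach_controller` entry that bdevperf reads from `/dev/fd/62` (the `printf '%s\n' '{ "params": { ... } }'` output in the log). A minimal Python sketch of that expansion — the helper name `build_target_json` is hypothetical, the field values are taken from the printed config:

```python
import json

def build_target_json(subsystem=1, trtype="tcp", traddr="10.0.0.2", trsvcid="4420",
                      hdgst=False, ddgst=False):
    """Mirror one entry of gen_nvmf_target_json: a bdev_nvme_attach_controller call."""
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": trtype,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": hdgst,
            "ddgst": ddgst,
        },
        "method": "bdev_nvme_attach_controller",
    }

# Matches the config the shell helper emits for Nvme1 / cnode1 on 10.0.0.2:4420.
print(json.dumps(build_target_json(), indent=2))
```

In the test itself the shell version additionally joins multiple such entries with `IFS=,` and normalizes them through `jq .` before handing the stream to bdevperf.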
00:26:37.303 11350.00 IOPS, 44.34 MiB/s 00:26:37.303 Latency(us) 00:26:37.303 [2024-10-15T11:06:57.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.303 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:37.303 Verification LBA range: start 0x0 length 0x4000 00:26:37.303 Nvme1n1 : 1.01 11396.42 44.52 0.00 0.00 11190.37 2200.14 11172.33 00:26:37.303 [2024-10-15T11:06:57.622Z] =================================================================================================================== 00:26:37.303 [2024-10-15T11:06:57.622Z] Total : 11396.42 44.52 0.00 0.00 11190.37 2200.14 11172.33 00:26:37.303 13:06:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1366418 00:26:37.303 13:06:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:37.303 13:06:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:37.303 13:06:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:37.303 13:06:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:26:37.303 13:06:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:26:37.303 13:06:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:37.303 13:06:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:37.303 { 00:26:37.303 "params": { 00:26:37.303 "name": "Nvme$subsystem", 00:26:37.303 "trtype": "$TEST_TRANSPORT", 00:26:37.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.303 "adrfam": "ipv4", 00:26:37.303 "trsvcid": "$NVMF_PORT", 00:26:37.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.303 "hdgst": ${hdgst:-false}, 00:26:37.303 "ddgst": 
${ddgst:-false} 00:26:37.303 }, 00:26:37.303 "method": "bdev_nvme_attach_controller" 00:26:37.303 } 00:26:37.303 EOF 00:26:37.303 )") 00:26:37.303 13:06:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:26:37.303 13:06:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:26:37.303 13:06:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:26:37.303 13:06:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:37.303 "params": { 00:26:37.303 "name": "Nvme1", 00:26:37.303 "trtype": "tcp", 00:26:37.303 "traddr": "10.0.0.2", 00:26:37.303 "adrfam": "ipv4", 00:26:37.303 "trsvcid": "4420", 00:26:37.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:37.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:37.303 "hdgst": false, 00:26:37.303 "ddgst": false 00:26:37.303 }, 00:26:37.303 "method": "bdev_nvme_attach_controller" 00:26:37.303 }' 00:26:37.562 [2024-10-15 13:06:57.645782] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:26:37.562 [2024-10-15 13:06:57.645828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366418 ] 00:26:37.562 [2024-10-15 13:06:57.711695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.562 [2024-10-15 13:06:57.749349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.826 Running I/O for 15 seconds... 
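The per-job line in the 1-second verification summary above packs runtime, IOPS, MiB/s, Fail/s, TO/s, and average/min/max latency into whitespace-separated fields after the device name. A short parser for that core line — a sketch only, with the field order taken from the table header in the log (the full log line also wraps the `Job:`/`Verification LBA range` preamble onto separate lines, which is omitted here):

```python
def parse_bdevperf_job_line(line):
    """Split 'Name : runtime IOPS MiB/s Fail/s TO/s avg min max' into a dict."""
    name, _, rest = line.partition(":")
    runtime, iops, mibs, fails, tos, avg, mn, mx = (float(x) for x in rest.split())
    return {"name": name.strip(), "runtime_s": runtime, "iops": iops,
            "mib_s": mibs, "fail_s": fails, "to_s": tos,
            "avg_us": avg, "min_us": mn, "max_us": mx}

# The Nvme1n1 summary line from the 1-second run above:
line = "Nvme1n1 : 1.01 11396.42 44.52 0.00 0.00 11190.37 2200.14 11172.33"
stats = parse_bdevperf_job_line(line)
print(stats["iops"], stats["avg_us"])  # → 11396.42 11190.37
```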
00:26:40.146 11430.00 IOPS, 44.65 MiB/s [2024-10-15T11:07:00.728Z] 11339.00 IOPS, 44.29 MiB/s [2024-10-15T11:07:00.728Z] 13:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1366159 00:26:40.409 13:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:40.409 [2024-10-15 13:07:00.614554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.409 [2024-10-15 13:07:00.614591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.409 [2024-10-15 13:07:00.614613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.409 [2024-10-15 13:07:00.614622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.409 [2024-10-15 13:07:00.614631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.409 [2024-10-15 13:07:00.614638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.409 [2024-10-15 13:07:00.614647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.409 [2024-10-15 13:07:00.614656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.409 [2024-10-15 13:07:00.614665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.409 [2024-10-15 13:07:00.614672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.409 [... identical WRITE command / ABORTED - SQ DELETION completion pairs repeated for lba:100152 through lba:100776 (len:8 each); every I/O outstanding on qid:1 was aborted when the target process was killed ...] 00:26:40.411 [2024-10-15 13:07:00.616032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:40.411 [2024-10-15 13:07:00.616197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.411 [2024-10-15 13:07:00.616214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.411 [2024-10-15 13:07:00.616230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.411 [2024-10-15 13:07:00.616399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.411 [2024-10-15 13:07:00.616406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:40.412 [2024-10-15 13:07:00.616449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.412 [2024-10-15 13:07:00.616662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7aac20 is same with the state(6) to be set 00:26:40.412 [2024-10-15 13:07:00.616678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:40.412 [2024-10-15 13:07:00.616683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:40.412 [2024-10-15 13:07:00.616689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101112 len:8 PRP1 0x0 PRP2 0x0 00:26:40.412 [2024-10-15 13:07:00.616698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.412 [2024-10-15 13:07:00.616740] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7aac20 was disconnected and freed. 
reset controller. 00:26:40.412 [2024-10-15 13:07:00.619469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.412 [2024-10-15 13:07:00.619521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.412 [2024-10-15 13:07:00.620049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.412 [2024-10-15 13:07:00.620066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.412 [2024-10-15 13:07:00.620074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.412 [2024-10-15 13:07:00.620246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.412 [2024-10-15 13:07:00.620419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.412 [2024-10-15 13:07:00.620428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.412 [2024-10-15 13:07:00.620437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.412 [2024-10-15 13:07:00.623191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.412 [2024-10-15 13:07:00.632744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.412 [2024-10-15 13:07:00.633178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.412 [2024-10-15 13:07:00.633198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.412 [2024-10-15 13:07:00.633206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.412 [2024-10-15 13:07:00.633381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.412 [2024-10-15 13:07:00.633555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.412 [2024-10-15 13:07:00.633565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.412 [2024-10-15 13:07:00.633572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.412 [2024-10-15 13:07:00.636321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.412 [2024-10-15 13:07:00.645763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.412 [2024-10-15 13:07:00.646142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.412 [2024-10-15 13:07:00.646160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.412 [2024-10-15 13:07:00.646169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.412 [2024-10-15 13:07:00.646340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.412 [2024-10-15 13:07:00.646509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.412 [2024-10-15 13:07:00.646518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.412 [2024-10-15 13:07:00.646524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.412 [2024-10-15 13:07:00.649247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.412 [2024-10-15 13:07:00.658715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.412 [2024-10-15 13:07:00.659093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.412 [2024-10-15 13:07:00.659139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.412 [2024-10-15 13:07:00.659163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.412 [2024-10-15 13:07:00.659757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.412 [2024-10-15 13:07:00.660347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.412 [2024-10-15 13:07:00.660357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.412 [2024-10-15 13:07:00.660365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.412 [2024-10-15 13:07:00.663037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.412 [2024-10-15 13:07:00.671614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.412 [2024-10-15 13:07:00.671959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.412 [2024-10-15 13:07:00.671976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.412 [2024-10-15 13:07:00.671986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.412 [2024-10-15 13:07:00.672148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.412 [2024-10-15 13:07:00.672307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.412 [2024-10-15 13:07:00.672317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.412 [2024-10-15 13:07:00.672323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.412 [2024-10-15 13:07:00.674855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.412 [2024-10-15 13:07:00.684562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.412 [2024-10-15 13:07:00.684975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.412 [2024-10-15 13:07:00.684994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.412 [2024-10-15 13:07:00.685002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.412 [2024-10-15 13:07:00.685174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.413 [2024-10-15 13:07:00.685348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.413 [2024-10-15 13:07:00.685358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.413 [2024-10-15 13:07:00.685365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.413 [2024-10-15 13:07:00.688123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.413 [2024-10-15 13:07:00.697415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.413 [2024-10-15 13:07:00.697773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.413 [2024-10-15 13:07:00.697790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.413 [2024-10-15 13:07:00.697797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.413 [2024-10-15 13:07:00.697957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.413 [2024-10-15 13:07:00.698117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.413 [2024-10-15 13:07:00.698126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.413 [2024-10-15 13:07:00.698132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.413 [2024-10-15 13:07:00.700700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.413 [2024-10-15 13:07:00.710149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.413 [2024-10-15 13:07:00.710521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.413 [2024-10-15 13:07:00.710538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.413 [2024-10-15 13:07:00.710546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.413 [2024-10-15 13:07:00.710713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.413 [2024-10-15 13:07:00.710874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.413 [2024-10-15 13:07:00.710886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.413 [2024-10-15 13:07:00.710892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.413 [2024-10-15 13:07:00.713418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.413 [2024-10-15 13:07:00.722965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.413 [2024-10-15 13:07:00.723241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.413 [2024-10-15 13:07:00.723257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.413 [2024-10-15 13:07:00.723265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.413 [2024-10-15 13:07:00.723424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.413 [2024-10-15 13:07:00.723583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.413 [2024-10-15 13:07:00.723592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.413 [2024-10-15 13:07:00.723599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.413 [2024-10-15 13:07:00.726292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.673 [2024-10-15 13:07:00.735854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.673 [2024-10-15 13:07:00.736206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.673 [2024-10-15 13:07:00.736223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.673 [2024-10-15 13:07:00.736231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.673 [2024-10-15 13:07:00.736418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.673 [2024-10-15 13:07:00.736599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.673 [2024-10-15 13:07:00.736616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.673 [2024-10-15 13:07:00.736623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.673 [2024-10-15 13:07:00.739291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.673 [2024-10-15 13:07:00.748650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.673 [2024-10-15 13:07:00.748927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.673 [2024-10-15 13:07:00.748943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.673 [2024-10-15 13:07:00.748950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.673 [2024-10-15 13:07:00.749109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.673 [2024-10-15 13:07:00.749269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.673 [2024-10-15 13:07:00.749278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.673 [2024-10-15 13:07:00.749284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.673 [2024-10-15 13:07:00.751827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.673 [2024-10-15 13:07:00.761471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.673 [2024-10-15 13:07:00.761750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.673 [2024-10-15 13:07:00.761767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.673 [2024-10-15 13:07:00.761774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.673 [2024-10-15 13:07:00.761933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.673 [2024-10-15 13:07:00.762093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.673 [2024-10-15 13:07:00.762102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.673 [2024-10-15 13:07:00.762109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.673 [2024-10-15 13:07:00.764642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.673 [2024-10-15 13:07:00.774422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.673 [2024-10-15 13:07:00.774750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.673 [2024-10-15 13:07:00.774767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.674 [2024-10-15 13:07:00.774774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.674 [2024-10-15 13:07:00.774933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.674 [2024-10-15 13:07:00.775092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.674 [2024-10-15 13:07:00.775102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.674 [2024-10-15 13:07:00.775108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.674 [2024-10-15 13:07:00.777640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.674 [2024-10-15 13:07:00.787195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.674 [2024-10-15 13:07:00.787524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.674 [2024-10-15 13:07:00.787541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.674 [2024-10-15 13:07:00.787548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.674 [2024-10-15 13:07:00.787715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.674 [2024-10-15 13:07:00.787875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.674 [2024-10-15 13:07:00.787884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.674 [2024-10-15 13:07:00.787891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.674 [2024-10-15 13:07:00.790415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.674 [2024-10-15 13:07:00.799968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.674 [2024-10-15 13:07:00.800238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.674 [2024-10-15 13:07:00.800254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.674 [2024-10-15 13:07:00.800261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.674 [2024-10-15 13:07:00.800425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.674 [2024-10-15 13:07:00.800613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.674 [2024-10-15 13:07:00.800623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.674 [2024-10-15 13:07:00.800630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.674 [2024-10-15 13:07:00.803222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.674 [2024-10-15 13:07:00.812707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.674 [2024-10-15 13:07:00.813077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.674 [2024-10-15 13:07:00.813094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.674 [2024-10-15 13:07:00.813102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.674 [2024-10-15 13:07:00.813260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.674 [2024-10-15 13:07:00.813419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.674 [2024-10-15 13:07:00.813428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.674 [2024-10-15 13:07:00.813433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.674 [2024-10-15 13:07:00.815964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.674 [2024-10-15 13:07:00.825507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.674 [2024-10-15 13:07:00.825879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.674 [2024-10-15 13:07:00.825896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.674 [2024-10-15 13:07:00.825903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.674 [2024-10-15 13:07:00.826062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.674 [2024-10-15 13:07:00.826221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.674 [2024-10-15 13:07:00.826230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.674 [2024-10-15 13:07:00.826236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.674 [2024-10-15 13:07:00.828769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.674 [2024-10-15 13:07:00.838388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.674 [2024-10-15 13:07:00.838704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.674 [2024-10-15 13:07:00.838721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.674 [2024-10-15 13:07:00.838729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.674 [2024-10-15 13:07:00.838888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.674 [2024-10-15 13:07:00.839048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.674 [2024-10-15 13:07:00.839057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.674 [2024-10-15 13:07:00.839066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.674 [2024-10-15 13:07:00.841663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.674 [2024-10-15 13:07:00.851173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.674 [2024-10-15 13:07:00.851437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.674 [2024-10-15 13:07:00.851454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.674 [2024-10-15 13:07:00.851461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.674 [2024-10-15 13:07:00.851626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.674 [2024-10-15 13:07:00.851785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.674 [2024-10-15 13:07:00.851795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.674 [2024-10-15 13:07:00.851801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.674 [2024-10-15 13:07:00.854319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.674 [2024-10-15 13:07:00.863963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.674 [2024-10-15 13:07:00.864327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.674 [2024-10-15 13:07:00.864371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.674 [2024-10-15 13:07:00.864394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.674 [2024-10-15 13:07:00.864920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.674 [2024-10-15 13:07:00.865109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.674 [2024-10-15 13:07:00.865119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.674 [2024-10-15 13:07:00.865125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.674 [2024-10-15 13:07:00.867863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.674 [2024-10-15 13:07:00.876958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.674 [2024-10-15 13:07:00.877377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.674 [2024-10-15 13:07:00.877395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.674 [2024-10-15 13:07:00.877404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.674 [2024-10-15 13:07:00.877576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.674 [2024-10-15 13:07:00.877756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.674 [2024-10-15 13:07:00.877767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.674 [2024-10-15 13:07:00.877775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.674 [2024-10-15 13:07:00.880520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.674 [2024-10-15 13:07:00.889938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.674 [2024-10-15 13:07:00.890337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.674 [2024-10-15 13:07:00.890358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.674 [2024-10-15 13:07:00.890366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.674 [2024-10-15 13:07:00.890540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.674 [2024-10-15 13:07:00.890720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.674 [2024-10-15 13:07:00.890732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.674 [2024-10-15 13:07:00.890739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.674 [2024-10-15 13:07:00.893486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.674 [2024-10-15 13:07:00.902899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.674 [2024-10-15 13:07:00.903174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.674 [2024-10-15 13:07:00.903190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.674 [2024-10-15 13:07:00.903198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.674 [2024-10-15 13:07:00.903371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.674 [2024-10-15 13:07:00.903544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.674 [2024-10-15 13:07:00.903554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.674 [2024-10-15 13:07:00.903561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.674 [2024-10-15 13:07:00.906317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.674 [2024-10-15 13:07:00.915814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.675 [2024-10-15 13:07:00.916192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.675 [2024-10-15 13:07:00.916209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.675 [2024-10-15 13:07:00.916217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.675 [2024-10-15 13:07:00.916385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.675 [2024-10-15 13:07:00.916553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.675 [2024-10-15 13:07:00.916563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.675 [2024-10-15 13:07:00.916569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.675 [2024-10-15 13:07:00.919245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.675 [2024-10-15 13:07:00.928642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.675 [2024-10-15 13:07:00.928973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.675 [2024-10-15 13:07:00.928989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.675 [2024-10-15 13:07:00.928997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.675 [2024-10-15 13:07:00.929156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.675 [2024-10-15 13:07:00.929318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.675 [2024-10-15 13:07:00.929328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.675 [2024-10-15 13:07:00.929334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.675 [2024-10-15 13:07:00.931870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.675 [2024-10-15 13:07:00.941557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.675 [2024-10-15 13:07:00.941973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.675 [2024-10-15 13:07:00.942018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.675 [2024-10-15 13:07:00.942042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.675 [2024-10-15 13:07:00.942637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.675 [2024-10-15 13:07:00.943078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.675 [2024-10-15 13:07:00.943087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.675 [2024-10-15 13:07:00.943093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.675 [2024-10-15 13:07:00.945620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.675 [2024-10-15 13:07:00.954273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.675 [2024-10-15 13:07:00.954694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.675 [2024-10-15 13:07:00.954739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.675 [2024-10-15 13:07:00.954763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.675 [2024-10-15 13:07:00.955141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.675 [2024-10-15 13:07:00.955302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.675 [2024-10-15 13:07:00.955311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.675 [2024-10-15 13:07:00.955317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.675 [2024-10-15 13:07:00.957939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.675 [2024-10-15 13:07:00.967050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.675 [2024-10-15 13:07:00.967386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.675 [2024-10-15 13:07:00.967402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.675 [2024-10-15 13:07:00.967410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.675 [2024-10-15 13:07:00.967569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.675 [2024-10-15 13:07:00.967735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.675 [2024-10-15 13:07:00.967745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.675 [2024-10-15 13:07:00.967752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.675 [2024-10-15 13:07:00.970278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.675 [2024-10-15 13:07:00.979912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.675 [2024-10-15 13:07:00.980228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.675 [2024-10-15 13:07:00.980244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.675 [2024-10-15 13:07:00.980252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.675 [2024-10-15 13:07:00.980411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.675 [2024-10-15 13:07:00.980570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.675 [2024-10-15 13:07:00.980579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.675 [2024-10-15 13:07:00.980585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.675 [2024-10-15 13:07:00.983118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.675 [2024-10-15 13:07:00.992866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.675 [2024-10-15 13:07:00.993224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.675 [2024-10-15 13:07:00.993241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.675 [2024-10-15 13:07:00.993249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.675 [2024-10-15 13:07:00.993417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.675 [2024-10-15 13:07:00.993587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.675 [2024-10-15 13:07:00.993596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.675 [2024-10-15 13:07:00.993610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.936 [2024-10-15 13:07:00.996277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.936 [2024-10-15 13:07:01.005595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.936 [2024-10-15 13:07:01.005916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.936 [2024-10-15 13:07:01.005949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.936 [2024-10-15 13:07:01.005957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.936 [2024-10-15 13:07:01.006124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.936 [2024-10-15 13:07:01.006292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.936 [2024-10-15 13:07:01.006301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.936 [2024-10-15 13:07:01.006307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.936 [2024-10-15 13:07:01.008894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.936 [2024-10-15 13:07:01.018439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.936 [2024-10-15 13:07:01.018718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.936 [2024-10-15 13:07:01.018735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.936 [2024-10-15 13:07:01.018745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.936 [2024-10-15 13:07:01.018907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.936 [2024-10-15 13:07:01.019066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.936 [2024-10-15 13:07:01.019076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.936 [2024-10-15 13:07:01.019082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.936 [2024-10-15 13:07:01.021615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.936 [2024-10-15 13:07:01.031159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.936 [2024-10-15 13:07:01.031427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.936 [2024-10-15 13:07:01.031443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.936 [2024-10-15 13:07:01.031450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.936 [2024-10-15 13:07:01.031615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.936 [2024-10-15 13:07:01.031776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.936 [2024-10-15 13:07:01.031785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.936 [2024-10-15 13:07:01.031791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.936 [2024-10-15 13:07:01.034314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.936 [2024-10-15 13:07:01.044051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.936 [2024-10-15 13:07:01.044400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.936 [2024-10-15 13:07:01.044443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.936 [2024-10-15 13:07:01.044466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.936 [2024-10-15 13:07:01.045076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.936 [2024-10-15 13:07:01.045237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.936 [2024-10-15 13:07:01.045246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.936 [2024-10-15 13:07:01.045252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.936 [2024-10-15 13:07:01.047777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.936 [2024-10-15 13:07:01.056932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.936 [2024-10-15 13:07:01.057329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.936 [2024-10-15 13:07:01.057345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.936 [2024-10-15 13:07:01.057353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.936 [2024-10-15 13:07:01.057512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.936 [2024-10-15 13:07:01.057677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.936 [2024-10-15 13:07:01.057690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.936 [2024-10-15 13:07:01.057697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.936 [2024-10-15 13:07:01.060274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.936 [2024-10-15 13:07:01.069697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.936 [2024-10-15 13:07:01.070108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.936 [2024-10-15 13:07:01.070125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:40.936 [2024-10-15 13:07:01.070133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:40.936 [2024-10-15 13:07:01.070291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:40.936 [2024-10-15 13:07:01.070450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.936 [2024-10-15 13:07:01.070459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.936 [2024-10-15 13:07:01.070466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.936 [2024-10-15 13:07:01.073000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.936 9632.00 IOPS, 37.62 MiB/s [2024-10-15T11:07:01.255Z] [2024-10-15 13:07:01.082475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.936 [2024-10-15 13:07:01.082919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.936 [2024-10-15 13:07:01.082969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.936 [2024-10-15 13:07:01.082994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.936 [2024-10-15 13:07:01.083504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.936 [2024-10-15 13:07:01.083672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.937 [2024-10-15 13:07:01.083682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.937 [2024-10-15 13:07:01.083689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.937 [2024-10-15 13:07:01.086215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.937 [2024-10-15 13:07:01.095297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.937 [2024-10-15 13:07:01.095730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.937 [2024-10-15 13:07:01.095747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.937 [2024-10-15 13:07:01.095754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.937 [2024-10-15 13:07:01.095913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.937 [2024-10-15 13:07:01.096073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.937 [2024-10-15 13:07:01.096082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.937 [2024-10-15 13:07:01.096088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.937 [2024-10-15 13:07:01.098613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.937 [2024-10-15 13:07:01.108061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.937 [2024-10-15 13:07:01.108481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.937 [2024-10-15 13:07:01.108518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.937 [2024-10-15 13:07:01.108545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.937 [2024-10-15 13:07:01.109146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.937 [2024-10-15 13:07:01.109637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.937 [2024-10-15 13:07:01.109646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.937 [2024-10-15 13:07:01.109653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.937 [2024-10-15 13:07:01.112172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.937 [2024-10-15 13:07:01.120792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.937 [2024-10-15 13:07:01.121214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.937 [2024-10-15 13:07:01.121259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.937 [2024-10-15 13:07:01.121283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.937 [2024-10-15 13:07:01.121741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.937 [2024-10-15 13:07:01.121911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.937 [2024-10-15 13:07:01.121920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.937 [2024-10-15 13:07:01.121926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.937 [2024-10-15 13:07:01.124691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.937 [2024-10-15 13:07:01.133876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.937 [2024-10-15 13:07:01.134305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.937 [2024-10-15 13:07:01.134322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.937 [2024-10-15 13:07:01.134330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.937 [2024-10-15 13:07:01.134503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.937 [2024-10-15 13:07:01.134680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.937 [2024-10-15 13:07:01.134691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.937 [2024-10-15 13:07:01.134697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.937 [2024-10-15 13:07:01.137447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.937 [2024-10-15 13:07:01.146653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.937 [2024-10-15 13:07:01.147071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.937 [2024-10-15 13:07:01.147115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.937 [2024-10-15 13:07:01.147139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.937 [2024-10-15 13:07:01.147595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.937 [2024-10-15 13:07:01.147804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.937 [2024-10-15 13:07:01.147814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.937 [2024-10-15 13:07:01.147820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.937 [2024-10-15 13:07:01.150499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.937 [2024-10-15 13:07:01.159367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.937 [2024-10-15 13:07:01.159782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.937 [2024-10-15 13:07:01.159799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.937 [2024-10-15 13:07:01.159806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.937 [2024-10-15 13:07:01.159964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.937 [2024-10-15 13:07:01.160124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.937 [2024-10-15 13:07:01.160133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.937 [2024-10-15 13:07:01.160139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.937 [2024-10-15 13:07:01.162668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.937 [2024-10-15 13:07:01.172183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.937 [2024-10-15 13:07:01.172577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.937 [2024-10-15 13:07:01.172594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.937 [2024-10-15 13:07:01.172607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.937 [2024-10-15 13:07:01.172790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.937 [2024-10-15 13:07:01.172957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.937 [2024-10-15 13:07:01.172967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.937 [2024-10-15 13:07:01.172973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.937 [2024-10-15 13:07:01.175541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.937 [2024-10-15 13:07:01.184916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.937 [2024-10-15 13:07:01.185333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.937 [2024-10-15 13:07:01.185377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.937 [2024-10-15 13:07:01.185401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.937 [2024-10-15 13:07:01.185811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.937 [2024-10-15 13:07:01.185972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.937 [2024-10-15 13:07:01.185981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.937 [2024-10-15 13:07:01.185992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.937 [2024-10-15 13:07:01.188515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.937 [2024-10-15 13:07:01.197745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.937 [2024-10-15 13:07:01.198081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.937 [2024-10-15 13:07:01.198097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.937 [2024-10-15 13:07:01.198105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.937 [2024-10-15 13:07:01.198265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.937 [2024-10-15 13:07:01.198424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.937 [2024-10-15 13:07:01.198433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.937 [2024-10-15 13:07:01.198439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.937 [2024-10-15 13:07:01.200988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.937 [2024-10-15 13:07:01.210531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.937 [2024-10-15 13:07:01.210871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.937 [2024-10-15 13:07:01.210888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.937 [2024-10-15 13:07:01.210896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.937 [2024-10-15 13:07:01.211056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.937 [2024-10-15 13:07:01.211215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.937 [2024-10-15 13:07:01.211225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.937 [2024-10-15 13:07:01.211231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.937 [2024-10-15 13:07:01.213760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.937 [2024-10-15 13:07:01.223426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.937 [2024-10-15 13:07:01.223856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.937 [2024-10-15 13:07:01.223901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.937 [2024-10-15 13:07:01.223925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.938 [2024-10-15 13:07:01.224506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.938 [2024-10-15 13:07:01.224745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.938 [2024-10-15 13:07:01.224755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.938 [2024-10-15 13:07:01.224762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.938 [2024-10-15 13:07:01.227423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.938 [2024-10-15 13:07:01.236431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.938 [2024-10-15 13:07:01.236930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.938 [2024-10-15 13:07:01.236975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.938 [2024-10-15 13:07:01.237000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.938 [2024-10-15 13:07:01.237572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.938 [2024-10-15 13:07:01.237748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.938 [2024-10-15 13:07:01.237759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.938 [2024-10-15 13:07:01.237765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.938 [2024-10-15 13:07:01.240349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.938 [2024-10-15 13:07:01.249223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:40.938 [2024-10-15 13:07:01.249562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.938 [2024-10-15 13:07:01.249579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:40.938 [2024-10-15 13:07:01.249586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:40.938 [2024-10-15 13:07:01.249761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:40.938 [2024-10-15 13:07:01.249922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:40.938 [2024-10-15 13:07:01.249932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:40.938 [2024-10-15 13:07:01.249938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:40.938 [2024-10-15 13:07:01.252522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.198 [2024-10-15 13:07:01.262118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.198 [2024-10-15 13:07:01.262468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.198 [2024-10-15 13:07:01.262485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.198 [2024-10-15 13:07:01.262492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.198 [2024-10-15 13:07:01.262667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.198 [2024-10-15 13:07:01.262836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.198 [2024-10-15 13:07:01.262845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.198 [2024-10-15 13:07:01.262852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.198 [2024-10-15 13:07:01.265463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.198 [2024-10-15 13:07:01.274846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.198 [2024-10-15 13:07:01.275255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.198 [2024-10-15 13:07:01.275293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.198 [2024-10-15 13:07:01.275318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.198 [2024-10-15 13:07:01.275849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.198 [2024-10-15 13:07:01.276017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.198 [2024-10-15 13:07:01.276025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.198 [2024-10-15 13:07:01.276031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.198 [2024-10-15 13:07:01.278551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.198 [2024-10-15 13:07:01.287557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.198 [2024-10-15 13:07:01.287973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.198 [2024-10-15 13:07:01.288011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.198 [2024-10-15 13:07:01.288037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.198 [2024-10-15 13:07:01.288632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.198 [2024-10-15 13:07:01.288831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.198 [2024-10-15 13:07:01.288841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.198 [2024-10-15 13:07:01.288847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.198 [2024-10-15 13:07:01.291365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.198 [2024-10-15 13:07:01.300278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.199 [2024-10-15 13:07:01.300670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.199 [2024-10-15 13:07:01.300687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.199 [2024-10-15 13:07:01.300695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.199 [2024-10-15 13:07:01.300854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.199 [2024-10-15 13:07:01.301013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.199 [2024-10-15 13:07:01.301022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.199 [2024-10-15 13:07:01.301028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.199 [2024-10-15 13:07:01.303644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.199 [2024-10-15 13:07:01.313013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.199 [2024-10-15 13:07:01.313432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.199 [2024-10-15 13:07:01.313448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.199 [2024-10-15 13:07:01.313456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.199 [2024-10-15 13:07:01.313623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.199 [2024-10-15 13:07:01.313784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.199 [2024-10-15 13:07:01.313793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.199 [2024-10-15 13:07:01.313800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.199 [2024-10-15 13:07:01.316322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.199 [2024-10-15 13:07:01.325843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.199 [2024-10-15 13:07:01.326254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.199 [2024-10-15 13:07:01.326271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.199 [2024-10-15 13:07:01.326278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.199 [2024-10-15 13:07:01.326437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.199 [2024-10-15 13:07:01.326596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.199 [2024-10-15 13:07:01.326614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.199 [2024-10-15 13:07:01.326620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.199 [2024-10-15 13:07:01.329139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.199 [2024-10-15 13:07:01.338742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.199 [2024-10-15 13:07:01.339159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.199 [2024-10-15 13:07:01.339202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.199 [2024-10-15 13:07:01.339227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.199 [2024-10-15 13:07:01.339799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.199 [2024-10-15 13:07:01.339969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.199 [2024-10-15 13:07:01.339979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.199 [2024-10-15 13:07:01.339985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.199 [2024-10-15 13:07:01.342663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.199 [2024-10-15 13:07:01.351685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.199 [2024-10-15 13:07:01.352111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.199 [2024-10-15 13:07:01.352155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.199 [2024-10-15 13:07:01.352178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.199 [2024-10-15 13:07:01.352773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.199 [2024-10-15 13:07:01.353269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.199 [2024-10-15 13:07:01.353279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.199 [2024-10-15 13:07:01.353285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.199 [2024-10-15 13:07:01.355804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.199 [2024-10-15 13:07:01.364535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.199 [2024-10-15 13:07:01.364955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.199 [2024-10-15 13:07:01.364972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.199 [2024-10-15 13:07:01.364982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.199 [2024-10-15 13:07:01.365141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.199 [2024-10-15 13:07:01.365301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.199 [2024-10-15 13:07:01.365310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.199 [2024-10-15 13:07:01.365315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.199 [2024-10-15 13:07:01.367841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.199 [2024-10-15 13:07:01.377356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.199 [2024-10-15 13:07:01.377769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.199 [2024-10-15 13:07:01.377787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.199 [2024-10-15 13:07:01.377796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.199 [2024-10-15 13:07:01.377963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.199 [2024-10-15 13:07:01.378132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.199 [2024-10-15 13:07:01.378141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.199 [2024-10-15 13:07:01.378148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.199 [2024-10-15 13:07:01.380880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.199 [2024-10-15 13:07:01.390390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.199 [2024-10-15 13:07:01.390836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.199 [2024-10-15 13:07:01.390854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.199 [2024-10-15 13:07:01.390863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.199 [2024-10-15 13:07:01.391061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.199 [2024-10-15 13:07:01.391234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.199 [2024-10-15 13:07:01.391244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.199 [2024-10-15 13:07:01.391250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.199 [2024-10-15 13:07:01.393969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.199 [2024-10-15 13:07:01.403390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.199 [2024-10-15 13:07:01.403808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.199 [2024-10-15 13:07:01.403825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.199 [2024-10-15 13:07:01.403833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.199 [2024-10-15 13:07:01.404002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.199 [2024-10-15 13:07:01.404170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.199 [2024-10-15 13:07:01.404182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.199 [2024-10-15 13:07:01.404189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.199 [2024-10-15 13:07:01.406862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.199 [2024-10-15 13:07:01.416116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.199 [2024-10-15 13:07:01.416463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.199 [2024-10-15 13:07:01.416480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.199 [2024-10-15 13:07:01.416487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.199 [2024-10-15 13:07:01.416653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.199 [2024-10-15 13:07:01.416813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.199 [2024-10-15 13:07:01.416823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.199 [2024-10-15 13:07:01.416829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.199 [2024-10-15 13:07:01.419353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.199 [2024-10-15 13:07:01.428873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.199 [2024-10-15 13:07:01.429267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.199 [2024-10-15 13:07:01.429311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.199 [2024-10-15 13:07:01.429334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.199 [2024-10-15 13:07:01.429931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.199 [2024-10-15 13:07:01.430284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.199 [2024-10-15 13:07:01.430293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.199 [2024-10-15 13:07:01.430299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.200 [2024-10-15 13:07:01.432822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.200 [2024-10-15 13:07:01.441716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.200 [2024-10-15 13:07:01.442140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.200 [2024-10-15 13:07:01.442157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.200 [2024-10-15 13:07:01.442166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.200 [2024-10-15 13:07:01.442334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.200 [2024-10-15 13:07:01.442503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.200 [2024-10-15 13:07:01.442512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.200 [2024-10-15 13:07:01.442519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.200 [2024-10-15 13:07:01.445066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.200 [2024-10-15 13:07:01.454436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.200 [2024-10-15 13:07:01.454825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.200 [2024-10-15 13:07:01.454842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.200 [2024-10-15 13:07:01.454849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.200 [2024-10-15 13:07:01.455008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.200 [2024-10-15 13:07:01.455167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.200 [2024-10-15 13:07:01.455176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.200 [2024-10-15 13:07:01.455182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.200 [2024-10-15 13:07:01.457755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.200 [2024-10-15 13:07:01.467171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.200 [2024-10-15 13:07:01.467564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.200 [2024-10-15 13:07:01.467581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.200 [2024-10-15 13:07:01.467589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.200 [2024-10-15 13:07:01.467774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.200 [2024-10-15 13:07:01.467942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.200 [2024-10-15 13:07:01.467952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.200 [2024-10-15 13:07:01.467958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.200 [2024-10-15 13:07:01.470539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.200 [2024-10-15 13:07:01.479910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.200 [2024-10-15 13:07:01.480322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.200 [2024-10-15 13:07:01.480339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.200 [2024-10-15 13:07:01.480346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.200 [2024-10-15 13:07:01.480504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.200 [2024-10-15 13:07:01.480668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.200 [2024-10-15 13:07:01.480678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.200 [2024-10-15 13:07:01.480684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.200 [2024-10-15 13:07:01.483195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.200 [2024-10-15 13:07:01.492789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.200 [2024-10-15 13:07:01.493220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.200 [2024-10-15 13:07:01.493237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.200 [2024-10-15 13:07:01.493244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.200 [2024-10-15 13:07:01.493406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.200 [2024-10-15 13:07:01.493566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.200 [2024-10-15 13:07:01.493575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.200 [2024-10-15 13:07:01.493581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.200 [2024-10-15 13:07:01.496108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.200 [2024-10-15 13:07:01.505616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.200 [2024-10-15 13:07:01.506035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.200 [2024-10-15 13:07:01.506079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.200 [2024-10-15 13:07:01.506103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.200 [2024-10-15 13:07:01.506618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.200 [2024-10-15 13:07:01.506779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.200 [2024-10-15 13:07:01.506787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.200 [2024-10-15 13:07:01.506793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.200 [2024-10-15 13:07:01.509312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.200 [2024-10-15 13:07:01.518566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.200 [2024-10-15 13:07:01.518958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.200 [2024-10-15 13:07:01.518975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.200 [2024-10-15 13:07:01.518983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.200 [2024-10-15 13:07:01.519151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.200 [2024-10-15 13:07:01.519319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.200 [2024-10-15 13:07:01.519328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.200 [2024-10-15 13:07:01.519335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.461 [2024-10-15 13:07:01.521978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.461 [2024-10-15 13:07:01.531399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.461 [2024-10-15 13:07:01.531817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.461 [2024-10-15 13:07:01.531835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.461 [2024-10-15 13:07:01.531843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.461 [2024-10-15 13:07:01.532013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.461 [2024-10-15 13:07:01.532181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.461 [2024-10-15 13:07:01.532191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.461 [2024-10-15 13:07:01.532201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.461 [2024-10-15 13:07:01.534786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.461 [2024-10-15 13:07:01.544323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.461 [2024-10-15 13:07:01.544706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.461 [2024-10-15 13:07:01.544723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.461 [2024-10-15 13:07:01.544731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.461 [2024-10-15 13:07:01.544890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.461 [2024-10-15 13:07:01.545049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.461 [2024-10-15 13:07:01.545058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.461 [2024-10-15 13:07:01.545064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.461 [2024-10-15 13:07:01.547590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.461 [2024-10-15 13:07:01.557116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.461 [2024-10-15 13:07:01.557524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.461 [2024-10-15 13:07:01.557541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.461 [2024-10-15 13:07:01.557549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.461 [2024-10-15 13:07:01.557724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.461 [2024-10-15 13:07:01.557894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.461 [2024-10-15 13:07:01.557904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.461 [2024-10-15 13:07:01.557910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.461 [2024-10-15 13:07:01.560482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.461 [2024-10-15 13:07:01.569956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.461 [2024-10-15 13:07:01.570367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.461 [2024-10-15 13:07:01.570383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.461 [2024-10-15 13:07:01.570391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.461 [2024-10-15 13:07:01.570550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.461 [2024-10-15 13:07:01.570716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.461 [2024-10-15 13:07:01.570726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.461 [2024-10-15 13:07:01.570732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.461 [2024-10-15 13:07:01.573297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.461 [2024-10-15 13:07:01.582705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.461 [2024-10-15 13:07:01.583050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.461 [2024-10-15 13:07:01.583065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.461 [2024-10-15 13:07:01.583072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.461 [2024-10-15 13:07:01.583231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.461 [2024-10-15 13:07:01.583390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.461 [2024-10-15 13:07:01.583399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.461 [2024-10-15 13:07:01.583406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.461 [2024-10-15 13:07:01.585933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.461 [2024-10-15 13:07:01.595446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.462 [2024-10-15 13:07:01.595866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.462 [2024-10-15 13:07:01.595911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.462 [2024-10-15 13:07:01.595935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.462 [2024-10-15 13:07:01.596489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.462 [2024-10-15 13:07:01.596672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.462 [2024-10-15 13:07:01.596681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.462 [2024-10-15 13:07:01.596688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.462 [2024-10-15 13:07:01.602256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.462 [2024-10-15 13:07:01.610475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.462 [2024-10-15 13:07:01.610990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.462 [2024-10-15 13:07:01.611012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.462 [2024-10-15 13:07:01.611023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.462 [2024-10-15 13:07:01.611277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.462 [2024-10-15 13:07:01.611531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.462 [2024-10-15 13:07:01.611544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.462 [2024-10-15 13:07:01.611553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.462 [2024-10-15 13:07:01.615614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.462 [2024-10-15 13:07:01.623486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.462 [2024-10-15 13:07:01.623908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.462 [2024-10-15 13:07:01.623953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.462 [2024-10-15 13:07:01.623977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.462 [2024-10-15 13:07:01.624481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.462 [2024-10-15 13:07:01.624662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.462 [2024-10-15 13:07:01.624672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.462 [2024-10-15 13:07:01.624678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.462 [2024-10-15 13:07:01.627344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.462 [2024-10-15 13:07:01.636398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.462 [2024-10-15 13:07:01.636838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.462 [2024-10-15 13:07:01.636856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.462 [2024-10-15 13:07:01.636864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.462 [2024-10-15 13:07:01.637037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.462 [2024-10-15 13:07:01.637210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.462 [2024-10-15 13:07:01.637220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.462 [2024-10-15 13:07:01.637226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.462 [2024-10-15 13:07:01.639985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.462 [2024-10-15 13:07:01.649448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.462 [2024-10-15 13:07:01.649828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.462 [2024-10-15 13:07:01.649846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.462 [2024-10-15 13:07:01.649854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.462 [2024-10-15 13:07:01.650028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.462 [2024-10-15 13:07:01.650201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.462 [2024-10-15 13:07:01.650211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.462 [2024-10-15 13:07:01.650219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.462 [2024-10-15 13:07:01.652983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.462 [2024-10-15 13:07:01.662484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.462 [2024-10-15 13:07:01.662863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.462 [2024-10-15 13:07:01.662907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.462 [2024-10-15 13:07:01.662932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.462 [2024-10-15 13:07:01.663511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.462 [2024-10-15 13:07:01.663989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.462 [2024-10-15 13:07:01.663999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.462 [2024-10-15 13:07:01.664005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.462 [2024-10-15 13:07:01.666672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.462 [2024-10-15 13:07:01.675471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.462 [2024-10-15 13:07:01.675890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.462 [2024-10-15 13:07:01.675936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.462 [2024-10-15 13:07:01.675960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.462 [2024-10-15 13:07:01.676541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.462 [2024-10-15 13:07:01.676736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.462 [2024-10-15 13:07:01.676744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.462 [2024-10-15 13:07:01.676750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.462 [2024-10-15 13:07:01.679269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.462 [2024-10-15 13:07:01.688189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.462 [2024-10-15 13:07:01.688598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.462 [2024-10-15 13:07:01.688619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.462 [2024-10-15 13:07:01.688626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.462 [2024-10-15 13:07:01.688786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.462 [2024-10-15 13:07:01.688946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.462 [2024-10-15 13:07:01.688955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.462 [2024-10-15 13:07:01.688961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.462 [2024-10-15 13:07:01.691486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.462 [2024-10-15 13:07:01.701096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.462 [2024-10-15 13:07:01.701512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.462 [2024-10-15 13:07:01.701556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.462 [2024-10-15 13:07:01.701581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.462 [2024-10-15 13:07:01.702060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.462 [2024-10-15 13:07:01.702230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.462 [2024-10-15 13:07:01.702238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.462 [2024-10-15 13:07:01.702244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.462 [2024-10-15 13:07:01.704806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.462 [2024-10-15 13:07:01.713821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.462 [2024-10-15 13:07:01.714231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.462 [2024-10-15 13:07:01.714248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.462 [2024-10-15 13:07:01.714258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.462 [2024-10-15 13:07:01.714417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.462 [2024-10-15 13:07:01.714577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.462 [2024-10-15 13:07:01.714586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.462 [2024-10-15 13:07:01.714592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.462 [2024-10-15 13:07:01.717119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.462 [2024-10-15 13:07:01.726635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.462 [2024-10-15 13:07:01.727070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.462 [2024-10-15 13:07:01.727114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.462 [2024-10-15 13:07:01.727137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.462 [2024-10-15 13:07:01.727735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.462 [2024-10-15 13:07:01.727918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.462 [2024-10-15 13:07:01.727928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.463 [2024-10-15 13:07:01.727934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.463 [2024-10-15 13:07:01.730453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.463 [2024-10-15 13:07:01.739467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.463 [2024-10-15 13:07:01.739835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-15 13:07:01.739880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.463 [2024-10-15 13:07:01.739903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.463 [2024-10-15 13:07:01.740485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.463 [2024-10-15 13:07:01.741054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.463 [2024-10-15 13:07:01.741062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.463 [2024-10-15 13:07:01.741068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.463 [2024-10-15 13:07:01.743682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.463 [2024-10-15 13:07:01.752315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.463 [2024-10-15 13:07:01.752718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-15 13:07:01.752735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.463 [2024-10-15 13:07:01.752742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.463 [2024-10-15 13:07:01.752902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.463 [2024-10-15 13:07:01.753061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.463 [2024-10-15 13:07:01.753074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.463 [2024-10-15 13:07:01.753079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.463 [2024-10-15 13:07:01.755606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.463 [2024-10-15 13:07:01.765133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.463 [2024-10-15 13:07:01.765540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-15 13:07:01.765576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.463 [2024-10-15 13:07:01.765616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.463 [2024-10-15 13:07:01.766128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.463 [2024-10-15 13:07:01.766288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.463 [2024-10-15 13:07:01.766296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.463 [2024-10-15 13:07:01.766302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.463 [2024-10-15 13:07:01.768825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.463 [2024-10-15 13:07:01.777924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.463 [2024-10-15 13:07:01.778379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-15 13:07:01.778422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.463 [2024-10-15 13:07:01.778445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.463 [2024-10-15 13:07:01.778868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.463 [2024-10-15 13:07:01.779038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.463 [2024-10-15 13:07:01.779047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.463 [2024-10-15 13:07:01.779054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.724 [2024-10-15 13:07:01.785281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.724 [2024-10-15 13:07:01.792842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.724 [2024-10-15 13:07:01.793341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.724 [2024-10-15 13:07:01.793363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.724 [2024-10-15 13:07:01.793374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.724 [2024-10-15 13:07:01.793637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.724 [2024-10-15 13:07:01.793892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.724 [2024-10-15 13:07:01.793906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.724 [2024-10-15 13:07:01.793915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.724 [2024-10-15 13:07:01.797965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.724 [2024-10-15 13:07:01.805840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.724 [2024-10-15 13:07:01.806271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.724 [2024-10-15 13:07:01.806315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.724 [2024-10-15 13:07:01.806339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.724 [2024-10-15 13:07:01.806932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.724 [2024-10-15 13:07:01.807146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.724 [2024-10-15 13:07:01.807155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.724 [2024-10-15 13:07:01.807162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.724 [2024-10-15 13:07:01.809869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.724 [2024-10-15 13:07:01.818639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.724 [2024-10-15 13:07:01.819065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.724 [2024-10-15 13:07:01.819111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.724 [2024-10-15 13:07:01.819135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.724 [2024-10-15 13:07:01.819729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.724 [2024-10-15 13:07:01.820182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.724 [2024-10-15 13:07:01.820191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.724 [2024-10-15 13:07:01.820196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.724 [2024-10-15 13:07:01.822717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.724 [2024-10-15 13:07:01.831482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.724 [2024-10-15 13:07:01.831914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.724 [2024-10-15 13:07:01.831958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.724 [2024-10-15 13:07:01.831982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.725 [2024-10-15 13:07:01.832563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.725 [2024-10-15 13:07:01.833159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.725 [2024-10-15 13:07:01.833186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.725 [2024-10-15 13:07:01.833208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.725 [2024-10-15 13:07:01.835752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.725 [2024-10-15 13:07:01.844411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.725 [2024-10-15 13:07:01.844825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.725 [2024-10-15 13:07:01.844868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.725 [2024-10-15 13:07:01.844892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.725 [2024-10-15 13:07:01.845100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.725 [2024-10-15 13:07:01.845261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.725 [2024-10-15 13:07:01.845271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.725 [2024-10-15 13:07:01.845277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.725 [2024-10-15 13:07:01.847800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.725 [2024-10-15 13:07:01.857174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.725 [2024-10-15 13:07:01.857557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.725 [2024-10-15 13:07:01.857574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.725 [2024-10-15 13:07:01.857582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.725 [2024-10-15 13:07:01.857747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.725 [2024-10-15 13:07:01.857907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.725 [2024-10-15 13:07:01.857916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.725 [2024-10-15 13:07:01.857922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.725 [2024-10-15 13:07:01.860530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.725 [2024-10-15 13:07:01.869902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.725 [2024-10-15 13:07:01.870309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.725 [2024-10-15 13:07:01.870326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.725 [2024-10-15 13:07:01.870332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.725 [2024-10-15 13:07:01.870492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.725 [2024-10-15 13:07:01.870657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.725 [2024-10-15 13:07:01.870667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.725 [2024-10-15 13:07:01.870673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.725 [2024-10-15 13:07:01.873193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.725 [2024-10-15 13:07:01.882709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.725 [2024-10-15 13:07:01.883020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.725 [2024-10-15 13:07:01.883037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.725 [2024-10-15 13:07:01.883044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.725 [2024-10-15 13:07:01.883202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.725 [2024-10-15 13:07:01.883361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.725 [2024-10-15 13:07:01.883371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.725 [2024-10-15 13:07:01.883380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.725 [2024-10-15 13:07:01.885908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.725 [2024-10-15 13:07:01.895750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.725 [2024-10-15 13:07:01.896173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.725 [2024-10-15 13:07:01.896191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.725 [2024-10-15 13:07:01.896198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.725 [2024-10-15 13:07:01.896372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.725 [2024-10-15 13:07:01.896546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.725 [2024-10-15 13:07:01.896555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.725 [2024-10-15 13:07:01.896563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.725 [2024-10-15 13:07:01.899310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.725 [2024-10-15 13:07:01.908710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.725 [2024-10-15 13:07:01.909039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.725 [2024-10-15 13:07:01.909057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.725 [2024-10-15 13:07:01.909065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.725 [2024-10-15 13:07:01.909239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.725 [2024-10-15 13:07:01.909413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.725 [2024-10-15 13:07:01.909423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.725 [2024-10-15 13:07:01.909430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.725 [2024-10-15 13:07:01.912180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.725 [2024-10-15 13:07:01.921787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.725 [2024-10-15 13:07:01.922197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.725 [2024-10-15 13:07:01.922238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.725 [2024-10-15 13:07:01.922264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.725 [2024-10-15 13:07:01.922855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.725 [2024-10-15 13:07:01.923440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.725 [2024-10-15 13:07:01.923465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.725 [2024-10-15 13:07:01.923485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.725 [2024-10-15 13:07:01.926178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.725 [2024-10-15 13:07:01.934668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.725 [2024-10-15 13:07:01.935069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.725 [2024-10-15 13:07:01.935112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.725 [2024-10-15 13:07:01.935136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.725 [2024-10-15 13:07:01.935611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.725 [2024-10-15 13:07:01.935780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.725 [2024-10-15 13:07:01.935789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.725 [2024-10-15 13:07:01.935795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.725 [2024-10-15 13:07:01.938474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.725 [2024-10-15 13:07:01.947404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.725 [2024-10-15 13:07:01.947786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.725 [2024-10-15 13:07:01.947804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.725 [2024-10-15 13:07:01.947811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.725 [2024-10-15 13:07:01.947970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.725 [2024-10-15 13:07:01.948129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.725 [2024-10-15 13:07:01.948139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.725 [2024-10-15 13:07:01.948145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.725 [2024-10-15 13:07:01.950678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.725 [2024-10-15 13:07:01.960278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.725 [2024-10-15 13:07:01.960674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.725 [2024-10-15 13:07:01.960719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.725 [2024-10-15 13:07:01.960743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.725 [2024-10-15 13:07:01.961307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.725 [2024-10-15 13:07:01.961467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.725 [2024-10-15 13:07:01.961477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.725 [2024-10-15 13:07:01.961483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.725 [2024-10-15 13:07:01.964022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.726 [2024-10-15 13:07:01.973088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.726 [2024-10-15 13:07:01.973476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.726 [2024-10-15 13:07:01.973493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.726 [2024-10-15 13:07:01.973500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.726 [2024-10-15 13:07:01.973687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.726 [2024-10-15 13:07:01.973857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.726 [2024-10-15 13:07:01.973867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.726 [2024-10-15 13:07:01.973873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.726 [2024-10-15 13:07:01.976430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.726 [2024-10-15 13:07:01.986012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.726 [2024-10-15 13:07:01.986405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.726 [2024-10-15 13:07:01.986450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.726 [2024-10-15 13:07:01.986474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.726 [2024-10-15 13:07:01.987070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.726 [2024-10-15 13:07:01.987642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.726 [2024-10-15 13:07:01.987652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.726 [2024-10-15 13:07:01.987658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.726 [2024-10-15 13:07:01.990177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.726 [2024-10-15 13:07:01.998827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.726 [2024-10-15 13:07:01.999213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.726 [2024-10-15 13:07:01.999231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:41.726 [2024-10-15 13:07:01.999238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:41.726 [2024-10-15 13:07:01.999397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:41.726 [2024-10-15 13:07:01.999557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.726 [2024-10-15 13:07:01.999566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.726 [2024-10-15 13:07:01.999572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.726 [2024-10-15 13:07:02.002116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.726 [2024-10-15 13:07:02.011651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.726 [2024-10-15 13:07:02.012034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.726 [2024-10-15 13:07:02.012051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.726 [2024-10-15 13:07:02.012059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.726 [2024-10-15 13:07:02.012218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.726 [2024-10-15 13:07:02.012378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.726 [2024-10-15 13:07:02.012387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.726 [2024-10-15 13:07:02.012393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.726 [2024-10-15 13:07:02.014923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.726 [2024-10-15 13:07:02.024451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.726 [2024-10-15 13:07:02.024853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.726 [2024-10-15 13:07:02.024898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.726 [2024-10-15 13:07:02.024922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.726 [2024-10-15 13:07:02.025502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.726 [2024-10-15 13:07:02.025697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.726 [2024-10-15 13:07:02.025707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.726 [2024-10-15 13:07:02.025713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.726 [2024-10-15 13:07:02.028232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.726 [2024-10-15 13:07:02.037161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.726 [2024-10-15 13:07:02.037540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.726 [2024-10-15 13:07:02.037556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.726 [2024-10-15 13:07:02.037564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.726 [2024-10-15 13:07:02.037740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.726 [2024-10-15 13:07:02.037909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.726 [2024-10-15 13:07:02.037919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.726 [2024-10-15 13:07:02.037925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.726 [2024-10-15 13:07:02.040551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.987 [2024-10-15 13:07:02.050111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.987 [2024-10-15 13:07:02.050405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.987 [2024-10-15 13:07:02.050424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.987 [2024-10-15 13:07:02.050432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.987 [2024-10-15 13:07:02.050606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.987 [2024-10-15 13:07:02.050776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.987 [2024-10-15 13:07:02.050787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.987 [2024-10-15 13:07:02.050794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.987 [2024-10-15 13:07:02.053433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.987 [2024-10-15 13:07:02.062916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.987 [2024-10-15 13:07:02.063283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.987 [2024-10-15 13:07:02.063323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.987 [2024-10-15 13:07:02.063356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.987 [2024-10-15 13:07:02.063875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.987 [2024-10-15 13:07:02.064037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.987 [2024-10-15 13:07:02.064046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.987 [2024-10-15 13:07:02.064052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.987 [2024-10-15 13:07:02.066574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.987 [2024-10-15 13:07:02.075848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.987 [2024-10-15 13:07:02.076445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.987 [2024-10-15 13:07:02.076496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.987 [2024-10-15 13:07:02.076521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.987 [2024-10-15 13:07:02.077013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.987 [2024-10-15 13:07:02.077177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.988 [2024-10-15 13:07:02.077188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.988 [2024-10-15 13:07:02.077194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.988 7224.00 IOPS, 28.22 MiB/s [2024-10-15T11:07:02.307Z] [2024-10-15 13:07:02.080886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.988 [2024-10-15 13:07:02.088685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.988 [2024-10-15 13:07:02.088954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.988 [2024-10-15 13:07:02.088970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.988 [2024-10-15 13:07:02.088977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.988 [2024-10-15 13:07:02.089136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.988 [2024-10-15 13:07:02.089295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.988 [2024-10-15 13:07:02.089305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.988 [2024-10-15 13:07:02.089311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.988 [2024-10-15 13:07:02.091847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.988 [2024-10-15 13:07:02.101537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.988 [2024-10-15 13:07:02.101840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.988 [2024-10-15 13:07:02.101885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.988 [2024-10-15 13:07:02.101912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.988 [2024-10-15 13:07:02.102495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.988 [2024-10-15 13:07:02.102975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.988 [2024-10-15 13:07:02.102986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.988 [2024-10-15 13:07:02.102992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.988 [2024-10-15 13:07:02.105674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.988 [2024-10-15 13:07:02.114596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.988 [2024-10-15 13:07:02.114931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.988 [2024-10-15 13:07:02.114948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.988 [2024-10-15 13:07:02.114956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.988 [2024-10-15 13:07:02.115129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.988 [2024-10-15 13:07:02.115302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.988 [2024-10-15 13:07:02.115312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.988 [2024-10-15 13:07:02.115318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.988 [2024-10-15 13:07:02.118072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.988 [2024-10-15 13:07:02.127578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.988 [2024-10-15 13:07:02.127915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.988 [2024-10-15 13:07:02.127932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.988 [2024-10-15 13:07:02.127940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.988 [2024-10-15 13:07:02.128108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.988 [2024-10-15 13:07:02.128278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.988 [2024-10-15 13:07:02.128288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.988 [2024-10-15 13:07:02.128295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.988 [2024-10-15 13:07:02.130969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.988 [2024-10-15 13:07:02.140621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.988 [2024-10-15 13:07:02.141027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.988 [2024-10-15 13:07:02.141070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.988 [2024-10-15 13:07:02.141094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.988 [2024-10-15 13:07:02.141686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.988 [2024-10-15 13:07:02.141920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.988 [2024-10-15 13:07:02.141929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.988 [2024-10-15 13:07:02.141936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.988 [2024-10-15 13:07:02.144555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.988 [2024-10-15 13:07:02.153368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.988 [2024-10-15 13:07:02.153779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.988 [2024-10-15 13:07:02.153798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.988 [2024-10-15 13:07:02.153806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.988 [2024-10-15 13:07:02.153974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.988 [2024-10-15 13:07:02.154142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.988 [2024-10-15 13:07:02.154152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.988 [2024-10-15 13:07:02.154158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.988 [2024-10-15 13:07:02.156885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.988 [2024-10-15 13:07:02.166387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.988 [2024-10-15 13:07:02.166711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.988 [2024-10-15 13:07:02.166729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.988 [2024-10-15 13:07:02.166737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.988 [2024-10-15 13:07:02.166910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.988 [2024-10-15 13:07:02.167082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.988 [2024-10-15 13:07:02.167092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.988 [2024-10-15 13:07:02.167098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.988 [2024-10-15 13:07:02.169837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.988 [2024-10-15 13:07:02.179274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.988 [2024-10-15 13:07:02.179629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.988 [2024-10-15 13:07:02.179647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.988 [2024-10-15 13:07:02.179655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.988 [2024-10-15 13:07:02.179822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.988 [2024-10-15 13:07:02.179991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.988 [2024-10-15 13:07:02.180000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.988 [2024-10-15 13:07:02.180007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.988 [2024-10-15 13:07:02.182680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.988 [2024-10-15 13:07:02.192006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.988 [2024-10-15 13:07:02.192370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.988 [2024-10-15 13:07:02.192387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.988 [2024-10-15 13:07:02.192398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.988 [2024-10-15 13:07:02.192558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.988 [2024-10-15 13:07:02.192724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.988 [2024-10-15 13:07:02.192734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.988 [2024-10-15 13:07:02.192740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.988 [2024-10-15 13:07:02.195267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.988 [2024-10-15 13:07:02.204947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.988 [2024-10-15 13:07:02.205272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.988 [2024-10-15 13:07:02.205290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.988 [2024-10-15 13:07:02.205297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.988 [2024-10-15 13:07:02.205466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.989 [2024-10-15 13:07:02.205640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.989 [2024-10-15 13:07:02.205650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.989 [2024-10-15 13:07:02.205657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.989 [2024-10-15 13:07:02.208320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.989 [2024-10-15 13:07:02.217913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.989 [2024-10-15 13:07:02.218312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.989 [2024-10-15 13:07:02.218329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.989 [2024-10-15 13:07:02.218337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.989 [2024-10-15 13:07:02.218505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.989 [2024-10-15 13:07:02.218677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.989 [2024-10-15 13:07:02.218687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.989 [2024-10-15 13:07:02.218693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.989 [2024-10-15 13:07:02.221233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.989 [2024-10-15 13:07:02.230773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.989 [2024-10-15 13:07:02.231193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.989 [2024-10-15 13:07:02.231237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.989 [2024-10-15 13:07:02.231260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.989 [2024-10-15 13:07:02.231855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.989 [2024-10-15 13:07:02.232320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.989 [2024-10-15 13:07:02.232332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.989 [2024-10-15 13:07:02.232339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.989 [2024-10-15 13:07:02.234869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.989 [2024-10-15 13:07:02.243685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.989 [2024-10-15 13:07:02.243968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.989 [2024-10-15 13:07:02.243984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.989 [2024-10-15 13:07:02.243992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.989 [2024-10-15 13:07:02.244150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.989 [2024-10-15 13:07:02.244310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.989 [2024-10-15 13:07:02.244319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.989 [2024-10-15 13:07:02.244325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.989 [2024-10-15 13:07:02.246854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.989 [2024-10-15 13:07:02.256759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.989 [2024-10-15 13:07:02.257042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.989 [2024-10-15 13:07:02.257059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.989 [2024-10-15 13:07:02.257067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.989 [2024-10-15 13:07:02.257240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.989 [2024-10-15 13:07:02.257412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.989 [2024-10-15 13:07:02.257423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.989 [2024-10-15 13:07:02.257430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.989 [2024-10-15 13:07:02.260171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.989 [2024-10-15 13:07:02.269745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.989 [2024-10-15 13:07:02.270088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.989 [2024-10-15 13:07:02.270105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.989 [2024-10-15 13:07:02.270113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.989 [2024-10-15 13:07:02.270285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.989 [2024-10-15 13:07:02.270463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.989 [2024-10-15 13:07:02.270473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.989 [2024-10-15 13:07:02.270479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.989 [2024-10-15 13:07:02.273169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.989 [2024-10-15 13:07:02.282679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.989 [2024-10-15 13:07:02.282961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.989 [2024-10-15 13:07:02.282978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.989 [2024-10-15 13:07:02.282986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.989 [2024-10-15 13:07:02.283155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.989 [2024-10-15 13:07:02.283324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.989 [2024-10-15 13:07:02.283333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.989 [2024-10-15 13:07:02.283340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.989 [2024-10-15 13:07:02.286014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.989 [2024-10-15 13:07:02.295640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:41.989 [2024-10-15 13:07:02.296041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.989 [2024-10-15 13:07:02.296059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:41.989 [2024-10-15 13:07:02.296067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:41.989 [2024-10-15 13:07:02.296239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:41.989 [2024-10-15 13:07:02.296411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:41.989 [2024-10-15 13:07:02.296421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:41.989 [2024-10-15 13:07:02.296427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:41.989 [2024-10-15 13:07:02.299147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.249 [2024-10-15 13:07:02.308623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.249 [2024-10-15 13:07:02.309044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.249 [2024-10-15 13:07:02.309089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.249 [2024-10-15 13:07:02.309112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.249 [2024-10-15 13:07:02.309587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.249 [2024-10-15 13:07:02.309767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.249 [2024-10-15 13:07:02.309777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.249 [2024-10-15 13:07:02.309785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.249 [2024-10-15 13:07:02.312482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.249 [2024-10-15 13:07:02.321743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.249 [2024-10-15 13:07:02.322013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.249 [2024-10-15 13:07:02.322029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.249 [2024-10-15 13:07:02.322037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.249 [2024-10-15 13:07:02.322214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.249 [2024-10-15 13:07:02.322388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.249 [2024-10-15 13:07:02.322398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.249 [2024-10-15 13:07:02.322405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.249 [2024-10-15 13:07:02.325156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.249 [2024-10-15 13:07:02.334708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.250 [2024-10-15 13:07:02.334997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.250 [2024-10-15 13:07:02.335014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.250 [2024-10-15 13:07:02.335022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.250 [2024-10-15 13:07:02.335194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.250 [2024-10-15 13:07:02.335367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.250 [2024-10-15 13:07:02.335377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.250 [2024-10-15 13:07:02.335383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.250 [2024-10-15 13:07:02.338133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.250 [2024-10-15 13:07:02.347580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.250 [2024-10-15 13:07:02.347968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.250 [2024-10-15 13:07:02.347985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.250 [2024-10-15 13:07:02.347992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.250 [2024-10-15 13:07:02.348152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.250 [2024-10-15 13:07:02.348335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.250 [2024-10-15 13:07:02.348344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.250 [2024-10-15 13:07:02.348350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.250 [2024-10-15 13:07:02.351021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.250 [2024-10-15 13:07:02.360414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.250 [2024-10-15 13:07:02.360827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.250 [2024-10-15 13:07:02.360843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.250 [2024-10-15 13:07:02.360851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.250 [2024-10-15 13:07:02.361009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.250 [2024-10-15 13:07:02.361169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.250 [2024-10-15 13:07:02.361178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.250 [2024-10-15 13:07:02.361187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.250 [2024-10-15 13:07:02.363728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.250 [2024-10-15 13:07:02.373273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.250 [2024-10-15 13:07:02.373621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.250 [2024-10-15 13:07:02.373638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.250 [2024-10-15 13:07:02.373646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.250 [2024-10-15 13:07:02.373822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.250 [2024-10-15 13:07:02.373983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.250 [2024-10-15 13:07:02.373992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.250 [2024-10-15 13:07:02.373998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.250 [2024-10-15 13:07:02.376552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.250 [2024-10-15 13:07:02.386096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.250 [2024-10-15 13:07:02.386425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.250 [2024-10-15 13:07:02.386442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.250 [2024-10-15 13:07:02.386449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.250 [2024-10-15 13:07:02.386612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.250 [2024-10-15 13:07:02.386772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.250 [2024-10-15 13:07:02.386782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.250 [2024-10-15 13:07:02.386788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.250 [2024-10-15 13:07:02.389316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.250 [2024-10-15 13:07:02.398857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.250 [2024-10-15 13:07:02.399211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.250 [2024-10-15 13:07:02.399227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.250 [2024-10-15 13:07:02.399235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.250 [2024-10-15 13:07:02.399394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.250 [2024-10-15 13:07:02.399554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.250 [2024-10-15 13:07:02.399563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.250 [2024-10-15 13:07:02.399569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.250 [2024-10-15 13:07:02.402092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.250 [2024-10-15 13:07:02.411806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.250 [2024-10-15 13:07:02.412088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.250 [2024-10-15 13:07:02.412108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.250 [2024-10-15 13:07:02.412116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.250 [2024-10-15 13:07:02.412284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.250 [2024-10-15 13:07:02.412452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.250 [2024-10-15 13:07:02.412461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.250 [2024-10-15 13:07:02.412468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.250 [2024-10-15 13:07:02.415207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.250 [2024-10-15 13:07:02.424910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.250 [2024-10-15 13:07:02.425323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.250 [2024-10-15 13:07:02.425366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.250 [2024-10-15 13:07:02.425390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.250 [2024-10-15 13:07:02.425782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.250 [2024-10-15 13:07:02.425952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.250 [2024-10-15 13:07:02.425961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.250 [2024-10-15 13:07:02.425968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.250 [2024-10-15 13:07:02.428672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.250 [2024-10-15 13:07:02.437812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.250 [2024-10-15 13:07:02.438180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.250 [2024-10-15 13:07:02.438224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.250 [2024-10-15 13:07:02.438248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.250 [2024-10-15 13:07:02.438811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.250 [2024-10-15 13:07:02.438986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.250 [2024-10-15 13:07:02.438995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.250 [2024-10-15 13:07:02.439001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.250 [2024-10-15 13:07:02.441685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.250 [2024-10-15 13:07:02.450779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.250 [2024-10-15 13:07:02.451190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.250 [2024-10-15 13:07:02.451206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.250 [2024-10-15 13:07:02.451214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.250 [2024-10-15 13:07:02.451373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.250 [2024-10-15 13:07:02.451536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.250 [2024-10-15 13:07:02.451545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.250 [2024-10-15 13:07:02.451551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.250 [2024-10-15 13:07:02.454081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.250 [2024-10-15 13:07:02.463559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.250 [2024-10-15 13:07:02.463937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.250 [2024-10-15 13:07:02.463954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.250 [2024-10-15 13:07:02.463961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.250 [2024-10-15 13:07:02.464119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.250 [2024-10-15 13:07:02.464278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.250 [2024-10-15 13:07:02.464287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.250 [2024-10-15 13:07:02.464293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.251 [2024-10-15 13:07:02.466819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.251 [2024-10-15 13:07:02.476345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.251 [2024-10-15 13:07:02.476660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.251 [2024-10-15 13:07:02.476676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.251 [2024-10-15 13:07:02.476683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.251 [2024-10-15 13:07:02.476843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.251 [2024-10-15 13:07:02.477003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.251 [2024-10-15 13:07:02.477011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.251 [2024-10-15 13:07:02.477017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.251 [2024-10-15 13:07:02.479539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.251 [2024-10-15 13:07:02.489205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.251 [2024-10-15 13:07:02.489613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.251 [2024-10-15 13:07:02.489658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.251 [2024-10-15 13:07:02.489681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.251 [2024-10-15 13:07:02.490263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.251 [2024-10-15 13:07:02.490820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.251 [2024-10-15 13:07:02.490829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.251 [2024-10-15 13:07:02.490835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.251 [2024-10-15 13:07:02.493449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.251 [2024-10-15 13:07:02.501969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.251 [2024-10-15 13:07:02.502338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.251 [2024-10-15 13:07:02.502355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.251 [2024-10-15 13:07:02.502362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.251 [2024-10-15 13:07:02.502521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.251 [2024-10-15 13:07:02.502703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.251 [2024-10-15 13:07:02.502713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.251 [2024-10-15 13:07:02.502720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.251 [2024-10-15 13:07:02.505306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.251 [2024-10-15 13:07:02.514876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.251 [2024-10-15 13:07:02.515275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.251 [2024-10-15 13:07:02.515291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.251 [2024-10-15 13:07:02.515299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.251 [2024-10-15 13:07:02.515457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.251 [2024-10-15 13:07:02.515624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.251 [2024-10-15 13:07:02.515633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.251 [2024-10-15 13:07:02.515640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.251 [2024-10-15 13:07:02.518162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.251 [2024-10-15 13:07:02.527695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.251 [2024-10-15 13:07:02.528086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.251 [2024-10-15 13:07:02.528103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.251 [2024-10-15 13:07:02.528110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.251 [2024-10-15 13:07:02.528269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.251 [2024-10-15 13:07:02.528429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.251 [2024-10-15 13:07:02.528438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.251 [2024-10-15 13:07:02.528444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.251 [2024-10-15 13:07:02.531065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.251 [2024-10-15 13:07:02.540623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.251 [2024-10-15 13:07:02.540967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.251 [2024-10-15 13:07:02.541016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.251 [2024-10-15 13:07:02.541048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.251 [2024-10-15 13:07:02.541645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.251 [2024-10-15 13:07:02.542151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.251 [2024-10-15 13:07:02.542160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.251 [2024-10-15 13:07:02.542166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.251 [2024-10-15 13:07:02.544758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.251 [2024-10-15 13:07:02.553387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.251 [2024-10-15 13:07:02.553686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.251 [2024-10-15 13:07:02.553703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.251 [2024-10-15 13:07:02.553710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.251 [2024-10-15 13:07:02.553870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.251 [2024-10-15 13:07:02.554030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.251 [2024-10-15 13:07:02.554039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.251 [2024-10-15 13:07:02.554045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.251 [2024-10-15 13:07:02.556635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.251 [2024-10-15 13:07:02.566201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.251 [2024-10-15 13:07:02.566615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.251 [2024-10-15 13:07:02.566661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.251 [2024-10-15 13:07:02.566684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.251 [2024-10-15 13:07:02.567144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.251 [2024-10-15 13:07:02.567314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.251 [2024-10-15 13:07:02.567323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.251 [2024-10-15 13:07:02.567329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.251 [2024-10-15 13:07:02.570003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.512 [2024-10-15 13:07:02.579103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.512 [2024-10-15 13:07:02.579508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.512 [2024-10-15 13:07:02.579551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.512 [2024-10-15 13:07:02.579575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.512 [2024-10-15 13:07:02.580095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.512 [2024-10-15 13:07:02.580256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.512 [2024-10-15 13:07:02.580271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.512 [2024-10-15 13:07:02.580277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.512 [2024-10-15 13:07:02.582801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.512 [2024-10-15 13:07:02.591871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.512 [2024-10-15 13:07:02.592276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.512 [2024-10-15 13:07:02.592321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.512 [2024-10-15 13:07:02.592344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.512 [2024-10-15 13:07:02.592940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.512 [2024-10-15 13:07:02.593355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.512 [2024-10-15 13:07:02.593364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.512 [2024-10-15 13:07:02.593370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.512 [2024-10-15 13:07:02.595889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.512 [2024-10-15 13:07:02.604664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.512 [2024-10-15 13:07:02.605033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.512 [2024-10-15 13:07:02.605050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.512 [2024-10-15 13:07:02.605057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.512 [2024-10-15 13:07:02.605216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.512 [2024-10-15 13:07:02.605375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.512 [2024-10-15 13:07:02.605384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.512 [2024-10-15 13:07:02.605390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.512 [2024-10-15 13:07:02.607915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.512 [2024-10-15 13:07:02.617438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.512 [2024-10-15 13:07:02.617839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.512 [2024-10-15 13:07:02.617883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.512 [2024-10-15 13:07:02.617906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.512 [2024-10-15 13:07:02.618415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.512 [2024-10-15 13:07:02.618576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.512 [2024-10-15 13:07:02.618585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.512 [2024-10-15 13:07:02.618592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.512 [2024-10-15 13:07:02.621118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.512 [2024-10-15 13:07:02.630194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.512 [2024-10-15 13:07:02.630564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.512 [2024-10-15 13:07:02.630580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.512 [2024-10-15 13:07:02.630586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.512 [2024-10-15 13:07:02.630751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.512 [2024-10-15 13:07:02.630911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.512 [2024-10-15 13:07:02.630921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.512 [2024-10-15 13:07:02.630927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.512 [2024-10-15 13:07:02.633449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.512 [2024-10-15 13:07:02.643052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.512 [2024-10-15 13:07:02.643438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.512 [2024-10-15 13:07:02.643455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.512 [2024-10-15 13:07:02.643462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.512 [2024-10-15 13:07:02.643642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.512 [2024-10-15 13:07:02.643811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.512 [2024-10-15 13:07:02.643821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.512 [2024-10-15 13:07:02.643827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.512 [2024-10-15 13:07:02.646378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.512 [2024-10-15 13:07:02.656000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.512 [2024-10-15 13:07:02.656390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.512 [2024-10-15 13:07:02.656407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.512 [2024-10-15 13:07:02.656414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.512 [2024-10-15 13:07:02.656573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.512 [2024-10-15 13:07:02.656738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.512 [2024-10-15 13:07:02.656747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.512 [2024-10-15 13:07:02.656753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.512 [2024-10-15 13:07:02.659274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.512 [2024-10-15 13:07:02.668748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.512 [2024-10-15 13:07:02.669083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.512 [2024-10-15 13:07:02.669101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.512 [2024-10-15 13:07:02.669110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.512 [2024-10-15 13:07:02.669283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.512 [2024-10-15 13:07:02.669452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.512 [2024-10-15 13:07:02.669461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.512 [2024-10-15 13:07:02.669468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.512 [2024-10-15 13:07:02.672208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.513 [2024-10-15 13:07:02.681738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.513 [2024-10-15 13:07:02.682163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.513 [2024-10-15 13:07:02.682181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.513 [2024-10-15 13:07:02.682189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.513 [2024-10-15 13:07:02.682361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.513 [2024-10-15 13:07:02.682534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.513 [2024-10-15 13:07:02.682543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.513 [2024-10-15 13:07:02.682550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.513 [2024-10-15 13:07:02.685299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.513 [2024-10-15 13:07:02.694663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.513 [2024-10-15 13:07:02.695063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.513 [2024-10-15 13:07:02.695080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.513 [2024-10-15 13:07:02.695087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.513 [2024-10-15 13:07:02.695246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.513 [2024-10-15 13:07:02.695405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.513 [2024-10-15 13:07:02.695414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.513 [2024-10-15 13:07:02.695420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.513 [2024-10-15 13:07:02.697947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.513 [2024-10-15 13:07:02.707385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.513 [2024-10-15 13:07:02.707748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.513 [2024-10-15 13:07:02.707764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.513 [2024-10-15 13:07:02.707771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.513 [2024-10-15 13:07:02.707930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.513 [2024-10-15 13:07:02.708089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.513 [2024-10-15 13:07:02.708098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.513 [2024-10-15 13:07:02.708107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.513 [2024-10-15 13:07:02.710655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.513 [2024-10-15 13:07:02.720235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.513 [2024-10-15 13:07:02.720608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.513 [2024-10-15 13:07:02.720625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.513 [2024-10-15 13:07:02.720633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.513 [2024-10-15 13:07:02.720793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.513 [2024-10-15 13:07:02.720953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.513 [2024-10-15 13:07:02.720962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.513 [2024-10-15 13:07:02.720968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.513 [2024-10-15 13:07:02.723495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.513 [2024-10-15 13:07:02.733008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.513 [2024-10-15 13:07:02.733397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.513 [2024-10-15 13:07:02.733437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.513 [2024-10-15 13:07:02.733462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.513 [2024-10-15 13:07:02.734059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.513 [2024-10-15 13:07:02.734311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.513 [2024-10-15 13:07:02.734320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.513 [2024-10-15 13:07:02.734326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.513 [2024-10-15 13:07:02.736847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.513 [2024-10-15 13:07:02.745856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.513 [2024-10-15 13:07:02.746179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.513 [2024-10-15 13:07:02.746224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.513 [2024-10-15 13:07:02.746248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.513 [2024-10-15 13:07:02.746842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.513 [2024-10-15 13:07:02.747425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.513 [2024-10-15 13:07:02.747451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.513 [2024-10-15 13:07:02.747483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.513 [2024-10-15 13:07:02.750006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.513 [2024-10-15 13:07:02.758633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.513 [2024-10-15 13:07:02.759034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.513 [2024-10-15 13:07:02.759085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.513 [2024-10-15 13:07:02.759110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.513 [2024-10-15 13:07:02.759618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.513 [2024-10-15 13:07:02.759779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.513 [2024-10-15 13:07:02.759788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.513 [2024-10-15 13:07:02.759794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.513 [2024-10-15 13:07:02.762406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.513 [2024-10-15 13:07:02.771489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.513 [2024-10-15 13:07:02.771885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.513 [2024-10-15 13:07:02.771902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.513 [2024-10-15 13:07:02.771909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.513 [2024-10-15 13:07:02.772067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.513 [2024-10-15 13:07:02.772227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.513 [2024-10-15 13:07:02.772236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.513 [2024-10-15 13:07:02.772242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.513 [2024-10-15 13:07:02.774844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.513 [2024-10-15 13:07:02.784236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.513 [2024-10-15 13:07:02.784625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.513 [2024-10-15 13:07:02.784670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.513 [2024-10-15 13:07:02.784694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.513 [2024-10-15 13:07:02.785273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.513 [2024-10-15 13:07:02.785507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.513 [2024-10-15 13:07:02.785515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.513 [2024-10-15 13:07:02.785521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.513 [2024-10-15 13:07:02.788038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.513 [2024-10-15 13:07:02.796955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.513 [2024-10-15 13:07:02.797351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.513 [2024-10-15 13:07:02.797368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.513 [2024-10-15 13:07:02.797376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.513 [2024-10-15 13:07:02.797534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.513 [2024-10-15 13:07:02.797702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.513 [2024-10-15 13:07:02.797711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.513 [2024-10-15 13:07:02.797717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.513 [2024-10-15 13:07:02.800232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.513 [2024-10-15 13:07:02.809742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.513 [2024-10-15 13:07:02.810141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.513 [2024-10-15 13:07:02.810185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.513 [2024-10-15 13:07:02.810209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.513 [2024-10-15 13:07:02.810653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.513 [2024-10-15 13:07:02.810814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.513 [2024-10-15 13:07:02.810823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.514 [2024-10-15 13:07:02.810830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.514 [2024-10-15 13:07:02.813350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.514 [2024-10-15 13:07:02.822566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.514 [2024-10-15 13:07:02.822959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.514 [2024-10-15 13:07:02.822976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.514 [2024-10-15 13:07:02.822983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.514 [2024-10-15 13:07:02.823142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.514 [2024-10-15 13:07:02.823301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.514 [2024-10-15 13:07:02.823310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.514 [2024-10-15 13:07:02.823316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.514 [2024-10-15 13:07:02.825842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.775 [2024-10-15 13:07:02.835427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.775 [2024-10-15 13:07:02.835836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.775 [2024-10-15 13:07:02.835881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.775 [2024-10-15 13:07:02.835905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.775 [2024-10-15 13:07:02.836466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.775 [2024-10-15 13:07:02.836631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.775 [2024-10-15 13:07:02.836641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.775 [2024-10-15 13:07:02.836647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.775 [2024-10-15 13:07:02.839341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.775 [2024-10-15 13:07:02.848148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.775 [2024-10-15 13:07:02.848548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.775 [2024-10-15 13:07:02.848592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.775 [2024-10-15 13:07:02.848631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.775 [2024-10-15 13:07:02.849213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.775 [2024-10-15 13:07:02.849781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.775 [2024-10-15 13:07:02.849799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.775 [2024-10-15 13:07:02.849813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.775 [2024-10-15 13:07:02.856052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.775 [2024-10-15 13:07:02.863100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.775 [2024-10-15 13:07:02.863614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.775 [2024-10-15 13:07:02.863658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.775 [2024-10-15 13:07:02.863681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.775 [2024-10-15 13:07:02.864262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.775 [2024-10-15 13:07:02.864779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.775 [2024-10-15 13:07:02.864791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.775 [2024-10-15 13:07:02.864802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.775 [2024-10-15 13:07:02.868869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.775 [2024-10-15 13:07:02.876097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.775 [2024-10-15 13:07:02.876508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.775 [2024-10-15 13:07:02.876551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.775 [2024-10-15 13:07:02.876575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.775 [2024-10-15 13:07:02.877171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.775 [2024-10-15 13:07:02.877760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.775 [2024-10-15 13:07:02.877770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.775 [2024-10-15 13:07:02.877776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.775 [2024-10-15 13:07:02.880441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.775 [2024-10-15 13:07:02.888951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.775 [2024-10-15 13:07:02.889346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.775 [2024-10-15 13:07:02.889362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.775 [2024-10-15 13:07:02.889372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.775 [2024-10-15 13:07:02.889531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.775 [2024-10-15 13:07:02.889696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.775 [2024-10-15 13:07:02.889705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.775 [2024-10-15 13:07:02.889711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.775 [2024-10-15 13:07:02.892230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.775 [2024-10-15 13:07:02.901760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.775 [2024-10-15 13:07:02.902149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.775 [2024-10-15 13:07:02.902165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.775 [2024-10-15 13:07:02.902172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.775 [2024-10-15 13:07:02.902331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.775 [2024-10-15 13:07:02.902491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.775 [2024-10-15 13:07:02.902499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.775 [2024-10-15 13:07:02.902505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.775 [2024-10-15 13:07:02.905121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.775 [2024-10-15 13:07:02.914583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.775 [2024-10-15 13:07:02.914974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.775 [2024-10-15 13:07:02.914991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.775 [2024-10-15 13:07:02.914999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.775 [2024-10-15 13:07:02.915159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.775 [2024-10-15 13:07:02.915319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.775 [2024-10-15 13:07:02.915328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.775 [2024-10-15 13:07:02.915334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.775 [2024-10-15 13:07:02.917859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.775 [2024-10-15 13:07:02.927385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.775 [2024-10-15 13:07:02.927755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.775 [2024-10-15 13:07:02.927773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.775 [2024-10-15 13:07:02.927780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.775 [2024-10-15 13:07:02.927949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.775 [2024-10-15 13:07:02.928118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.775 [2024-10-15 13:07:02.928130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.775 [2024-10-15 13:07:02.928137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.775 [2024-10-15 13:07:02.930861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.775 [2024-10-15 13:07:02.940415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.775 [2024-10-15 13:07:02.940730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.775 [2024-10-15 13:07:02.940748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.775 [2024-10-15 13:07:02.940755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.775 [2024-10-15 13:07:02.940928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.775 [2024-10-15 13:07:02.941100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.775 [2024-10-15 13:07:02.941110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.776 [2024-10-15 13:07:02.941116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.776 [2024-10-15 13:07:02.943868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.776 [2024-10-15 13:07:02.953337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.776 [2024-10-15 13:07:02.953766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.776 [2024-10-15 13:07:02.953811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.776 [2024-10-15 13:07:02.953835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.776 [2024-10-15 13:07:02.954414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.776 [2024-10-15 13:07:02.955002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.776 [2024-10-15 13:07:02.955011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.776 [2024-10-15 13:07:02.955018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.776 [2024-10-15 13:07:02.957716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.776 [2024-10-15 13:07:02.966226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.776 [2024-10-15 13:07:02.966559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.776 [2024-10-15 13:07:02.966575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.776 [2024-10-15 13:07:02.966583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.776 [2024-10-15 13:07:02.966755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.776 [2024-10-15 13:07:02.966923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.776 [2024-10-15 13:07:02.966933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.776 [2024-10-15 13:07:02.966939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.776 [2024-10-15 13:07:02.969516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.776 [2024-10-15 13:07:02.979057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.776 [2024-10-15 13:07:02.979469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.776 [2024-10-15 13:07:02.979514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:42.776 [2024-10-15 13:07:02.979538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:42.776 [2024-10-15 13:07:02.980058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:42.776 [2024-10-15 13:07:02.980219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.776 [2024-10-15 13:07:02.980229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.776 [2024-10-15 13:07:02.980235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.776 [2024-10-15 13:07:02.982756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.776 [2024-10-15 13:07:02.991864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.776 [2024-10-15 13:07:02.992273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.776 [2024-10-15 13:07:02.992291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.776 [2024-10-15 13:07:02.992299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.776 [2024-10-15 13:07:02.992471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.776 [2024-10-15 13:07:02.992649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.776 [2024-10-15 13:07:02.992659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.776 [2024-10-15 13:07:02.992665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.776 [2024-10-15 13:07:02.995405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.776 [2024-10-15 13:07:03.004616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.776 [2024-10-15 13:07:03.005008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.776 [2024-10-15 13:07:03.005025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.776 [2024-10-15 13:07:03.005032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.776 [2024-10-15 13:07:03.005191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.776 [2024-10-15 13:07:03.005350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.776 [2024-10-15 13:07:03.005359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.776 [2024-10-15 13:07:03.005365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.776 [2024-10-15 13:07:03.007893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.776 [2024-10-15 13:07:03.017459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.776 [2024-10-15 13:07:03.017881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.776 [2024-10-15 13:07:03.017925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.776 [2024-10-15 13:07:03.017949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.776 [2024-10-15 13:07:03.018537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.776 [2024-10-15 13:07:03.019117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.776 [2024-10-15 13:07:03.019126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.776 [2024-10-15 13:07:03.019132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.776 [2024-10-15 13:07:03.021658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.776 [2024-10-15 13:07:03.030286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.776 [2024-10-15 13:07:03.030695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.776 [2024-10-15 13:07:03.030741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.776 [2024-10-15 13:07:03.030765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.776 [2024-10-15 13:07:03.031141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.776 [2024-10-15 13:07:03.031302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.776 [2024-10-15 13:07:03.031311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.776 [2024-10-15 13:07:03.031317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.776 [2024-10-15 13:07:03.033850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.776 [2024-10-15 13:07:03.043237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.776 [2024-10-15 13:07:03.043625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.776 [2024-10-15 13:07:03.043642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.776 [2024-10-15 13:07:03.043649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.776 [2024-10-15 13:07:03.043808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.776 [2024-10-15 13:07:03.043967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.776 [2024-10-15 13:07:03.043976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.776 [2024-10-15 13:07:03.043982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.776 [2024-10-15 13:07:03.046538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.776 [2024-10-15 13:07:03.056063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.776 [2024-10-15 13:07:03.056456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.776 [2024-10-15 13:07:03.056473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.776 [2024-10-15 13:07:03.056480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.776 [2024-10-15 13:07:03.056646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.776 [2024-10-15 13:07:03.056807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.776 [2024-10-15 13:07:03.056817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.776 [2024-10-15 13:07:03.056828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.776 [2024-10-15 13:07:03.059347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.776 [2024-10-15 13:07:03.068912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.776 [2024-10-15 13:07:03.069232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.777 [2024-10-15 13:07:03.069248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.777 [2024-10-15 13:07:03.069256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.777 [2024-10-15 13:07:03.069414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.777 [2024-10-15 13:07:03.069575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.777 [2024-10-15 13:07:03.069584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.777 [2024-10-15 13:07:03.069590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.777 [2024-10-15 13:07:03.072119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.777 [2024-10-15 13:07:03.081734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.777 [2024-10-15 13:07:03.082135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.777 [2024-10-15 13:07:03.082152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.777 [2024-10-15 13:07:03.082160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.777 5779.20 IOPS, 22.57 MiB/s [2024-10-15T11:07:03.096Z] [2024-10-15 13:07:03.083475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.777 [2024-10-15 13:07:03.083640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.777 [2024-10-15 13:07:03.083650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.777 [2024-10-15 13:07:03.083656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.777 [2024-10-15 13:07:03.086172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.777 [2024-10-15 13:07:03.094609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.777 [2024-10-15 13:07:03.094939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.777 [2024-10-15 13:07:03.094956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:42.777 [2024-10-15 13:07:03.094964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:42.777 [2024-10-15 13:07:03.095131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:42.777 [2024-10-15 13:07:03.095299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.777 [2024-10-15 13:07:03.095308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.777 [2024-10-15 13:07:03.095315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.037 [2024-10-15 13:07:03.097982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.037 [2024-10-15 13:07:03.107458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.037 [2024-10-15 13:07:03.107827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-10-15 13:07:03.107847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.037 [2024-10-15 13:07:03.107855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.037 [2024-10-15 13:07:03.108014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.037 [2024-10-15 13:07:03.108173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.037 [2024-10-15 13:07:03.108182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.037 [2024-10-15 13:07:03.108188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.037 [2024-10-15 13:07:03.110708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.037 [2024-10-15 13:07:03.120304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.037 [2024-10-15 13:07:03.120623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-10-15 13:07:03.120640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.038 [2024-10-15 13:07:03.120648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.038 [2024-10-15 13:07:03.120806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.038 [2024-10-15 13:07:03.120964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.038 [2024-10-15 13:07:03.120974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.038 [2024-10-15 13:07:03.120980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.038 [2024-10-15 13:07:03.123498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.038 [2024-10-15 13:07:03.133011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.038 [2024-10-15 13:07:03.133334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-10-15 13:07:03.133350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.038 [2024-10-15 13:07:03.133358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.038 [2024-10-15 13:07:03.133516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.038 [2024-10-15 13:07:03.133680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.038 [2024-10-15 13:07:03.133690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.038 [2024-10-15 13:07:03.133696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.038 [2024-10-15 13:07:03.136209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.038 [2024-10-15 13:07:03.145801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.038 [2024-10-15 13:07:03.146209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-10-15 13:07:03.146251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.038 [2024-10-15 13:07:03.146275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.038 [2024-10-15 13:07:03.146869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.038 [2024-10-15 13:07:03.147382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.038 [2024-10-15 13:07:03.147391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.038 [2024-10-15 13:07:03.147397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.038 [2024-10-15 13:07:03.149912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.038 [2024-10-15 13:07:03.158626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.038 [2024-10-15 13:07:03.158946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-10-15 13:07:03.158963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.038 [2024-10-15 13:07:03.158971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.038 [2024-10-15 13:07:03.159129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.038 [2024-10-15 13:07:03.159289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.038 [2024-10-15 13:07:03.159298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.038 [2024-10-15 13:07:03.159304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.038 [2024-10-15 13:07:03.161846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.038 [2024-10-15 13:07:03.171431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.038 [2024-10-15 13:07:03.171773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-10-15 13:07:03.171789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.038 [2024-10-15 13:07:03.171796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.038 [2024-10-15 13:07:03.171955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.038 [2024-10-15 13:07:03.172115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.038 [2024-10-15 13:07:03.172124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.038 [2024-10-15 13:07:03.172130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.038 [2024-10-15 13:07:03.174705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.038 [2024-10-15 13:07:03.184272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.038 [2024-10-15 13:07:03.184678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-10-15 13:07:03.184695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.038 [2024-10-15 13:07:03.184703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.038 [2024-10-15 13:07:03.184870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.038 [2024-10-15 13:07:03.185038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.038 [2024-10-15 13:07:03.185048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.038 [2024-10-15 13:07:03.185054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.038 [2024-10-15 13:07:03.187788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.038 [2024-10-15 13:07:03.197286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.038 [2024-10-15 13:07:03.197648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-10-15 13:07:03.197666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.038 [2024-10-15 13:07:03.197673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.038 [2024-10-15 13:07:03.197846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.038 [2024-10-15 13:07:03.198020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.038 [2024-10-15 13:07:03.198029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.038 [2024-10-15 13:07:03.198035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.038 [2024-10-15 13:07:03.200731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.038 [2024-10-15 13:07:03.210032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.038 [2024-10-15 13:07:03.210354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-10-15 13:07:03.210370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.038 [2024-10-15 13:07:03.210377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.038 [2024-10-15 13:07:03.210536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.038 [2024-10-15 13:07:03.210722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.038 [2024-10-15 13:07:03.210732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.038 [2024-10-15 13:07:03.210738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.038 [2024-10-15 13:07:03.213400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.038 [2024-10-15 13:07:03.222863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.038 [2024-10-15 13:07:03.223229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-10-15 13:07:03.223245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.038 [2024-10-15 13:07:03.223253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.038 [2024-10-15 13:07:03.223411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.038 [2024-10-15 13:07:03.223570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.038 [2024-10-15 13:07:03.223579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.038 [2024-10-15 13:07:03.223585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.038 [2024-10-15 13:07:03.226113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.038 [2024-10-15 13:07:03.235636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.038 [2024-10-15 13:07:03.236029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-10-15 13:07:03.236045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.038 [2024-10-15 13:07:03.236060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.038 [2024-10-15 13:07:03.236220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.038 [2024-10-15 13:07:03.236380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.038 [2024-10-15 13:07:03.236389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.038 [2024-10-15 13:07:03.236395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.038 [2024-10-15 13:07:03.238924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.038 [2024-10-15 13:07:03.248365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.039 [2024-10-15 13:07:03.248763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-10-15 13:07:03.248807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.039 [2024-10-15 13:07:03.248831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.039 [2024-10-15 13:07:03.249410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.039 [2024-10-15 13:07:03.249873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.039 [2024-10-15 13:07:03.249883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.039 [2024-10-15 13:07:03.249889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.039 [2024-10-15 13:07:03.252419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.039 [2024-10-15 13:07:03.261195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.039 [2024-10-15 13:07:03.261590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-10-15 13:07:03.261610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.039 [2024-10-15 13:07:03.261618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.039 [2024-10-15 13:07:03.261778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.039 [2024-10-15 13:07:03.261937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.039 [2024-10-15 13:07:03.261947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.039 [2024-10-15 13:07:03.261952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.039 [2024-10-15 13:07:03.264562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.039 [2024-10-15 13:07:03.273944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.039 [2024-10-15 13:07:03.274338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-10-15 13:07:03.274355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.039 [2024-10-15 13:07:03.274363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.039 [2024-10-15 13:07:03.274522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.039 [2024-10-15 13:07:03.274686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.039 [2024-10-15 13:07:03.274699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.039 [2024-10-15 13:07:03.274705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.039 [2024-10-15 13:07:03.277227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.039 [2024-10-15 13:07:03.286734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.039 [2024-10-15 13:07:03.287130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-10-15 13:07:03.287147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.039 [2024-10-15 13:07:03.287154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.039 [2024-10-15 13:07:03.287314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.039 [2024-10-15 13:07:03.287474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.039 [2024-10-15 13:07:03.287483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.039 [2024-10-15 13:07:03.287489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.039 [2024-10-15 13:07:03.290019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.039 [2024-10-15 13:07:03.299544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.039 [2024-10-15 13:07:03.299940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.039 [2024-10-15 13:07:03.299957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.039 [2024-10-15 13:07:03.299965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.039 [2024-10-15 13:07:03.300123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.039 [2024-10-15 13:07:03.300282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.039 [2024-10-15 13:07:03.300291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.039 [2024-10-15 13:07:03.300296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.039 [2024-10-15 13:07:03.302823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.039 [2024-10-15 13:07:03.312366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.039 [2024-10-15 13:07:03.312771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.039 [2024-10-15 13:07:03.312788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.039 [2024-10-15 13:07:03.312796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.039 [2024-10-15 13:07:03.312965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.039 [2024-10-15 13:07:03.313123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.039 [2024-10-15 13:07:03.313132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.039 [2024-10-15 13:07:03.313138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.039 [2024-10-15 13:07:03.315659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.039 [2024-10-15 13:07:03.325346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.039 [2024-10-15 13:07:03.325740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.039 [2024-10-15 13:07:03.325758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.039 [2024-10-15 13:07:03.325765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.039 [2024-10-15 13:07:03.325938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.039 [2024-10-15 13:07:03.326112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.039 [2024-10-15 13:07:03.326122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.039 [2024-10-15 13:07:03.326128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.039 [2024-10-15 13:07:03.328870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.039 [2024-10-15 13:07:03.338414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.039 [2024-10-15 13:07:03.338819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.039 [2024-10-15 13:07:03.338836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.039 [2024-10-15 13:07:03.338845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.039 [2024-10-15 13:07:03.339017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.039 [2024-10-15 13:07:03.339190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.039 [2024-10-15 13:07:03.339199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.039 [2024-10-15 13:07:03.339206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.039 [2024-10-15 13:07:03.341928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.039 [2024-10-15 13:07:03.351435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.039 [2024-10-15 13:07:03.351839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.039 [2024-10-15 13:07:03.351857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.039 [2024-10-15 13:07:03.351865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.039 [2024-10-15 13:07:03.352032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.039 [2024-10-15 13:07:03.352200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.039 [2024-10-15 13:07:03.352210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.039 [2024-10-15 13:07:03.352216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.039 [2024-10-15 13:07:03.354891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.300 [2024-10-15 13:07:03.364463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.300 [2024-10-15 13:07:03.364873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.300 [2024-10-15 13:07:03.364890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.300 [2024-10-15 13:07:03.364898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.300 [2024-10-15 13:07:03.365074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.300 [2024-10-15 13:07:03.365247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.300 [2024-10-15 13:07:03.365257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.300 [2024-10-15 13:07:03.365264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.300 [2024-10-15 13:07:03.367971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.300 [2024-10-15 13:07:03.377538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.300 [2024-10-15 13:07:03.377918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.300 [2024-10-15 13:07:03.377936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.300 [2024-10-15 13:07:03.377943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.300 [2024-10-15 13:07:03.378111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.300 [2024-10-15 13:07:03.378279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.300 [2024-10-15 13:07:03.378289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.300 [2024-10-15 13:07:03.378295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.300 [2024-10-15 13:07:03.380970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.300 [2024-10-15 13:07:03.390622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.300 [2024-10-15 13:07:03.391040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.300 [2024-10-15 13:07:03.391084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.300 [2024-10-15 13:07:03.391108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.300 [2024-10-15 13:07:03.391704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.300 [2024-10-15 13:07:03.392160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.300 [2024-10-15 13:07:03.392170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.300 [2024-10-15 13:07:03.392176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.300 [2024-10-15 13:07:03.394844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.300 [2024-10-15 13:07:03.403488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.300 [2024-10-15 13:07:03.403914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.300 [2024-10-15 13:07:03.403932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.300 [2024-10-15 13:07:03.403940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.300 [2024-10-15 13:07:03.404108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.300 [2024-10-15 13:07:03.404276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.300 [2024-10-15 13:07:03.404285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.300 [2024-10-15 13:07:03.404295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.300 [2024-10-15 13:07:03.406971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.300 [2024-10-15 13:07:03.416273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.300 [2024-10-15 13:07:03.416676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.300 [2024-10-15 13:07:03.416722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.300 [2024-10-15 13:07:03.416747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.300 [2024-10-15 13:07:03.417165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.300 [2024-10-15 13:07:03.417326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.300 [2024-10-15 13:07:03.417335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.300 [2024-10-15 13:07:03.417341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.300 [2024-10-15 13:07:03.419869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.300 [2024-10-15 13:07:03.429093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.300 [2024-10-15 13:07:03.429457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.300 [2024-10-15 13:07:03.429473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.300 [2024-10-15 13:07:03.429481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.300 [2024-10-15 13:07:03.429648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.300 [2024-10-15 13:07:03.429809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.300 [2024-10-15 13:07:03.429818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.300 [2024-10-15 13:07:03.429825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.300 [2024-10-15 13:07:03.432346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.300 [2024-10-15 13:07:03.441939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.300 [2024-10-15 13:07:03.442337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.300 [2024-10-15 13:07:03.442380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.300 [2024-10-15 13:07:03.442404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.300 [2024-10-15 13:07:03.442907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.300 [2024-10-15 13:07:03.443077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.300 [2024-10-15 13:07:03.443086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.300 [2024-10-15 13:07:03.443093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.300 [2024-10-15 13:07:03.445851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.300 [2024-10-15 13:07:03.455003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.300 [2024-10-15 13:07:03.455412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.300 [2024-10-15 13:07:03.455428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.300 [2024-10-15 13:07:03.455437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.300 [2024-10-15 13:07:03.455616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.300 [2024-10-15 13:07:03.455790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.300 [2024-10-15 13:07:03.455800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.300 [2024-10-15 13:07:03.455807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.300 [2024-10-15 13:07:03.458544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.300 [2024-10-15 13:07:03.468027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.300 [2024-10-15 13:07:03.468429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.300 [2024-10-15 13:07:03.468473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.300 [2024-10-15 13:07:03.468497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.300 [2024-10-15 13:07:03.469001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.300 [2024-10-15 13:07:03.469170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.300 [2024-10-15 13:07:03.469179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.300 [2024-10-15 13:07:03.469185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.300 [2024-10-15 13:07:03.471846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.300 [2024-10-15 13:07:03.480756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.301 [2024-10-15 13:07:03.481070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.301 [2024-10-15 13:07:03.481087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.301 [2024-10-15 13:07:03.481094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.301 [2024-10-15 13:07:03.481254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.301 [2024-10-15 13:07:03.481414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.301 [2024-10-15 13:07:03.481424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.301 [2024-10-15 13:07:03.481431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.301 [2024-10-15 13:07:03.483962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.301 [2024-10-15 13:07:03.493495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.301 [2024-10-15 13:07:03.493911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.301 [2024-10-15 13:07:03.493956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.301 [2024-10-15 13:07:03.493980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.301 [2024-10-15 13:07:03.494375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.301 [2024-10-15 13:07:03.494539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.301 [2024-10-15 13:07:03.494548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.301 [2024-10-15 13:07:03.494555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.301 [2024-10-15 13:07:03.497087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.301 [2024-10-15 13:07:03.506436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.301 [2024-10-15 13:07:03.506755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.301 [2024-10-15 13:07:03.506772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.301 [2024-10-15 13:07:03.506778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.301 [2024-10-15 13:07:03.506937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.301 [2024-10-15 13:07:03.507096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.301 [2024-10-15 13:07:03.507105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.301 [2024-10-15 13:07:03.507111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.301 [2024-10-15 13:07:03.509644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.301 [2024-10-15 13:07:03.519251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.301 [2024-10-15 13:07:03.519564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.301 [2024-10-15 13:07:03.519582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.301 [2024-10-15 13:07:03.519589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.301 [2024-10-15 13:07:03.519755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.301 [2024-10-15 13:07:03.519916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.301 [2024-10-15 13:07:03.519925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.301 [2024-10-15 13:07:03.519932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.301 [2024-10-15 13:07:03.522454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.301 [2024-10-15 13:07:03.532002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.301 [2024-10-15 13:07:03.532432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.301 [2024-10-15 13:07:03.532477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.301 [2024-10-15 13:07:03.532501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.301 [2024-10-15 13:07:03.533093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.301 [2024-10-15 13:07:03.533516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.301 [2024-10-15 13:07:03.533526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.301 [2024-10-15 13:07:03.533532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.301 [2024-10-15 13:07:03.536297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.301 [2024-10-15 13:07:03.545050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.301 [2024-10-15 13:07:03.545450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.301 [2024-10-15 13:07:03.545468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.301 [2024-10-15 13:07:03.545476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.301 [2024-10-15 13:07:03.545650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.301 [2024-10-15 13:07:03.545818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.301 [2024-10-15 13:07:03.545828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.301 [2024-10-15 13:07:03.545834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.301 [2024-10-15 13:07:03.548499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.301 [2024-10-15 13:07:03.557786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.301 [2024-10-15 13:07:03.558103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.301 [2024-10-15 13:07:03.558119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.301 [2024-10-15 13:07:03.558126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.301 [2024-10-15 13:07:03.558285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.301 [2024-10-15 13:07:03.558445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.301 [2024-10-15 13:07:03.558454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.301 [2024-10-15 13:07:03.558460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.301 [2024-10-15 13:07:03.560994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.301 [2024-10-15 13:07:03.570638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.301 [2024-10-15 13:07:03.570945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.301 [2024-10-15 13:07:03.570962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.301 [2024-10-15 13:07:03.570970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.301 [2024-10-15 13:07:03.571129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.301 [2024-10-15 13:07:03.571289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.301 [2024-10-15 13:07:03.571299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.301 [2024-10-15 13:07:03.571305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.301 [2024-10-15 13:07:03.573926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.301 [2024-10-15 13:07:03.583494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.301 [2024-10-15 13:07:03.583765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.301 [2024-10-15 13:07:03.583781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.301 [2024-10-15 13:07:03.583792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.301 [2024-10-15 13:07:03.583952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.301 [2024-10-15 13:07:03.584111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.301 [2024-10-15 13:07:03.584120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.301 [2024-10-15 13:07:03.584126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.301 [2024-10-15 13:07:03.586661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.302 [2024-10-15 13:07:03.596344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.302 [2024-10-15 13:07:03.596735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-10-15 13:07:03.596782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.302 [2024-10-15 13:07:03.596806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.302 [2024-10-15 13:07:03.597386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.302 [2024-10-15 13:07:03.597722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.302 [2024-10-15 13:07:03.597741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.302 [2024-10-15 13:07:03.597755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.302 [2024-10-15 13:07:03.603998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1366159 Killed "${NVMF_APP[@]}" "$@"
00:26:43.302 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:43.302 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:43.302 [2024-10-15 13:07:03.611226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.302 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:26:43.302 [2024-10-15 13:07:03.611716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-10-15 13:07:03.611739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420
00:26:43.302 [2024-10-15 13:07:03.611750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set
00:26:43.302 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:43.302 [2024-10-15 13:07:03.612004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor
00:26:43.302 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:43.302 [2024-10-15 13:07:03.612262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.302 [2024-10-15 13:07:03.612275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.302 [2024-10-15 13:07:03.612285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.302 [2024-10-15 13:07:03.616344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.302 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1367405 00:26:43.302 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:43.302 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1367405 00:26:43.302 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1367405 ']' 00:26:43.302 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.302 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:43.302 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:43.562 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:43.562 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.562 [2024-10-15 13:07:03.624333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.562 [2024-10-15 13:07:03.624748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-10-15 13:07:03.624765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.562 [2024-10-15 13:07:03.624775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.562 [2024-10-15 13:07:03.624949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.562 [2024-10-15 13:07:03.625122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.562 [2024-10-15 13:07:03.625131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.562 [2024-10-15 13:07:03.625138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.562 [2024-10-15 13:07:03.627892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.562 [2024-10-15 13:07:03.637303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.562 [2024-10-15 13:07:03.637666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-10-15 13:07:03.637685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.563 [2024-10-15 13:07:03.637693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.563 [2024-10-15 13:07:03.637867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.563 [2024-10-15 13:07:03.638042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.563 [2024-10-15 13:07:03.638051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.563 [2024-10-15 13:07:03.638058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.563 [2024-10-15 13:07:03.640810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.563 [2024-10-15 13:07:03.650360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.563 [2024-10-15 13:07:03.650765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-10-15 13:07:03.650793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.563 [2024-10-15 13:07:03.650801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.563 [2024-10-15 13:07:03.650970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.563 [2024-10-15 13:07:03.651141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.563 [2024-10-15 13:07:03.651152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.563 [2024-10-15 13:07:03.651159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.563 [2024-10-15 13:07:03.653890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.563 [2024-10-15 13:07:03.663368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.563 [2024-10-15 13:07:03.663696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-10-15 13:07:03.663714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.563 [2024-10-15 13:07:03.663722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.563 [2024-10-15 13:07:03.663895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.563 [2024-10-15 13:07:03.664069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.563 [2024-10-15 13:07:03.664079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.563 [2024-10-15 13:07:03.664085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.563 [2024-10-15 13:07:03.666609] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:26:43.563 [2024-10-15 13:07:03.666649] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.563 [2024-10-15 13:07:03.666777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.563 [2024-10-15 13:07:03.676424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.563 [2024-10-15 13:07:03.676738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-10-15 13:07:03.676755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.563 [2024-10-15 13:07:03.676763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.563 [2024-10-15 13:07:03.676932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.563 [2024-10-15 13:07:03.677100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.563 [2024-10-15 13:07:03.677110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.563 [2024-10-15 13:07:03.677116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.563 [2024-10-15 13:07:03.679788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.563 [2024-10-15 13:07:03.689450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.563 [2024-10-15 13:07:03.689739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-10-15 13:07:03.689756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.563 [2024-10-15 13:07:03.689764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.563 [2024-10-15 13:07:03.689955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.563 [2024-10-15 13:07:03.690130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.563 [2024-10-15 13:07:03.690142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.563 [2024-10-15 13:07:03.690150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.563 [2024-10-15 13:07:03.692855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.563 [2024-10-15 13:07:03.702440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.563 [2024-10-15 13:07:03.702824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-10-15 13:07:03.702841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.563 [2024-10-15 13:07:03.702849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.563 [2024-10-15 13:07:03.703021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.563 [2024-10-15 13:07:03.703195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.563 [2024-10-15 13:07:03.703205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.563 [2024-10-15 13:07:03.703212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.563 [2024-10-15 13:07:03.705959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.563 [2024-10-15 13:07:03.715515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.563 [2024-10-15 13:07:03.715927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-10-15 13:07:03.715944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.563 [2024-10-15 13:07:03.715952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.563 [2024-10-15 13:07:03.716123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.563 [2024-10-15 13:07:03.716296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.563 [2024-10-15 13:07:03.716306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.563 [2024-10-15 13:07:03.716313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.563 [2024-10-15 13:07:03.719063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.563 [2024-10-15 13:07:03.728493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.563 [2024-10-15 13:07:03.728902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-10-15 13:07:03.728920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.563 [2024-10-15 13:07:03.728928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.563 [2024-10-15 13:07:03.729096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.563 [2024-10-15 13:07:03.729267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.563 [2024-10-15 13:07:03.729276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.563 [2024-10-15 13:07:03.729283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.563 [2024-10-15 13:07:03.731953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.564 [2024-10-15 13:07:03.737693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:43.564 [2024-10-15 13:07:03.741458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.564 [2024-10-15 13:07:03.741838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-10-15 13:07:03.741855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.564 [2024-10-15 13:07:03.741864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.564 [2024-10-15 13:07:03.742032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.564 [2024-10-15 13:07:03.742201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.564 [2024-10-15 13:07:03.742209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.564 [2024-10-15 13:07:03.742215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.564 [2024-10-15 13:07:03.744919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.564 [2024-10-15 13:07:03.754415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.564 [2024-10-15 13:07:03.754809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-10-15 13:07:03.754827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.564 [2024-10-15 13:07:03.754834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.564 [2024-10-15 13:07:03.755003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.564 [2024-10-15 13:07:03.755171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.564 [2024-10-15 13:07:03.755180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.564 [2024-10-15 13:07:03.755187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.564 [2024-10-15 13:07:03.757859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.564 [2024-10-15 13:07:03.767368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.564 [2024-10-15 13:07:03.767687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-10-15 13:07:03.767704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.564 [2024-10-15 13:07:03.767712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.564 [2024-10-15 13:07:03.767881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.564 [2024-10-15 13:07:03.768048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.564 [2024-10-15 13:07:03.768057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.564 [2024-10-15 13:07:03.768064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.564 [2024-10-15 13:07:03.770746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.564 [2024-10-15 13:07:03.779864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.564 [2024-10-15 13:07:03.779890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.564 [2024-10-15 13:07:03.779898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.564 [2024-10-15 13:07:03.779907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:43.564 [2024-10-15 13:07:03.779912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:43.564 [2024-10-15 13:07:03.780292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.564 [2024-10-15 13:07:03.780702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-10-15 13:07:03.780721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.564 [2024-10-15 13:07:03.780730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.564 [2024-10-15 13:07:03.780909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.564 [2024-10-15 13:07:03.781079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.564 [2024-10-15 13:07:03.781088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.564 [2024-10-15 13:07:03.781095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.564 [2024-10-15 13:07:03.781378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.564 [2024-10-15 13:07:03.781486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.564 [2024-10-15 13:07:03.781488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.564 [2024-10-15 13:07:03.783855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.564 [2024-10-15 13:07:03.793260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.564 [2024-10-15 13:07:03.793719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-10-15 13:07:03.793741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.564 [2024-10-15 13:07:03.793750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.564 [2024-10-15 13:07:03.793925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.564 [2024-10-15 13:07:03.794100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.564 [2024-10-15 13:07:03.794109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.564 [2024-10-15 13:07:03.794117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.564 [2024-10-15 13:07:03.796867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.564 [2024-10-15 13:07:03.806254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.564 [2024-10-15 13:07:03.806704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-10-15 13:07:03.806726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.564 [2024-10-15 13:07:03.806736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.564 [2024-10-15 13:07:03.806911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.564 [2024-10-15 13:07:03.807085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.564 [2024-10-15 13:07:03.807095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.564 [2024-10-15 13:07:03.807102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.564 [2024-10-15 13:07:03.809845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.564 [2024-10-15 13:07:03.819256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.564 [2024-10-15 13:07:03.819657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-10-15 13:07:03.819678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.564 [2024-10-15 13:07:03.819687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.564 [2024-10-15 13:07:03.819860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.564 [2024-10-15 13:07:03.820034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.564 [2024-10-15 13:07:03.820043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.564 [2024-10-15 13:07:03.820050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.564 [2024-10-15 13:07:03.822789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.564 [2024-10-15 13:07:03.832349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.565 [2024-10-15 13:07:03.832691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.565 [2024-10-15 13:07:03.832712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.565 [2024-10-15 13:07:03.832721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.565 [2024-10-15 13:07:03.832895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.565 [2024-10-15 13:07:03.833069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.565 [2024-10-15 13:07:03.833078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.565 [2024-10-15 13:07:03.833086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.565 [2024-10-15 13:07:03.835828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.565 [2024-10-15 13:07:03.845390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.565 [2024-10-15 13:07:03.845763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.565 [2024-10-15 13:07:03.845781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.565 [2024-10-15 13:07:03.845790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.565 [2024-10-15 13:07:03.845964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.565 [2024-10-15 13:07:03.846137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.565 [2024-10-15 13:07:03.846147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.565 [2024-10-15 13:07:03.846154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.565 [2024-10-15 13:07:03.848901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.565 [2024-10-15 13:07:03.858470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.565 [2024-10-15 13:07:03.858812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.565 [2024-10-15 13:07:03.858830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.565 [2024-10-15 13:07:03.858844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.565 [2024-10-15 13:07:03.859017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.565 [2024-10-15 13:07:03.859190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.565 [2024-10-15 13:07:03.859201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.565 [2024-10-15 13:07:03.859208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.565 [2024-10-15 13:07:03.861952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.565 [2024-10-15 13:07:03.871515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.565 [2024-10-15 13:07:03.871808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.565 [2024-10-15 13:07:03.871826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.565 [2024-10-15 13:07:03.871834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.565 [2024-10-15 13:07:03.872008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.565 [2024-10-15 13:07:03.872182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.565 [2024-10-15 13:07:03.872192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.565 [2024-10-15 13:07:03.872199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.565 [2024-10-15 13:07:03.874947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.565 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:43.565 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:43.565 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:43.565 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:43.565 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.824 [2024-10-15 13:07:03.884505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.824 [2024-10-15 13:07:03.884845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.824 [2024-10-15 13:07:03.884863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.824 [2024-10-15 13:07:03.884872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.824 [2024-10-15 13:07:03.885045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.824 [2024-10-15 13:07:03.885219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.824 [2024-10-15 13:07:03.885228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.824 [2024-10-15 13:07:03.885235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.824 [2024-10-15 13:07:03.887985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.824 [2024-10-15 13:07:03.897552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.824 [2024-10-15 13:07:03.897891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.824 [2024-10-15 13:07:03.897908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.824 [2024-10-15 13:07:03.897920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.824 [2024-10-15 13:07:03.898094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.824 [2024-10-15 13:07:03.898267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.824 [2024-10-15 13:07:03.898278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.824 [2024-10-15 13:07:03.898284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.824 [2024-10-15 13:07:03.901032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.824 [2024-10-15 13:07:03.910591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.824 [2024-10-15 13:07:03.910920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.824 [2024-10-15 13:07:03.910938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.824 [2024-10-15 13:07:03.910948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.825 [2024-10-15 13:07:03.911122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.825 [2024-10-15 13:07:03.911296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.825 [2024-10-15 13:07:03.911306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.825 [2024-10-15 13:07:03.911313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.825 [2024-10-15 13:07:03.914059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.825 [2024-10-15 13:07:03.917808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:43.825 [2024-10-15 13:07:03.923606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.825 [2024-10-15 13:07:03.923937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.825 [2024-10-15 13:07:03.923954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.825 [2024-10-15 13:07:03.923961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.825 [2024-10-15 13:07:03.924134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.825 [2024-10-15 13:07:03.924308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.825 [2024-10-15 13:07:03.924318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.825 [2024-10-15 13:07:03.924325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:43.825 [2024-10-15 13:07:03.927074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.825 [2024-10-15 13:07:03.936626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.825 [2024-10-15 13:07:03.937007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.825 [2024-10-15 13:07:03.937025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.825 [2024-10-15 13:07:03.937033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.825 [2024-10-15 13:07:03.937205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.825 [2024-10-15 13:07:03.937379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.825 [2024-10-15 13:07:03.937389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.825 [2024-10-15 13:07:03.937395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.825 [2024-10-15 13:07:03.940161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.825 [2024-10-15 13:07:03.949714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.825 [2024-10-15 13:07:03.950055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.825 [2024-10-15 13:07:03.950073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.825 [2024-10-15 13:07:03.950083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.825 [2024-10-15 13:07:03.950256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.825 [2024-10-15 13:07:03.950430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.825 [2024-10-15 13:07:03.950440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.825 [2024-10-15 13:07:03.950447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.825 [2024-10-15 13:07:03.953207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.825 [2024-10-15 13:07:03.962773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.825 [2024-10-15 13:07:03.963060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.825 [2024-10-15 13:07:03.963077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.825 [2024-10-15 13:07:03.963085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.825 [2024-10-15 13:07:03.963258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.825 Malloc0 00:26:43.825 [2024-10-15 13:07:03.963431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.825 [2024-10-15 13:07:03.963446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.825 [2024-10-15 13:07:03.963454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.825 [2024-10-15 13:07:03.966236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.825 [2024-10-15 13:07:03.975813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:43.825 [2024-10-15 13:07:03.976171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.825 [2024-10-15 13:07:03.976189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5825c0 with addr=10.0.0.2, port=4420 00:26:43.825 [2024-10-15 13:07:03.976197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5825c0 is same with the state(6) to be set 00:26:43.825 [2024-10-15 13:07:03.976369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5825c0 (9): Bad file descriptor 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.825 [2024-10-15 13:07:03.976544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.825 [2024-10-15 13:07:03.976554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.825 [2024-10-15 13:07:03.976560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.825 [2024-10-15 13:07:03.979304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.825 [2024-10-15 13:07:03.987216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.825 [2024-10-15 13:07:03.988848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.825 13:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1366418 00:26:43.825 [2024-10-15 13:07:04.018907] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:45.203 4933.67 IOPS, 19.27 MiB/s [2024-10-15T11:07:06.089Z] 5855.57 IOPS, 22.87 MiB/s [2024-10-15T11:07:07.466Z] 6540.25 IOPS, 25.55 MiB/s [2024-10-15T11:07:08.404Z] 7076.44 IOPS, 27.64 MiB/s [2024-10-15T11:07:09.340Z] 7503.60 IOPS, 29.31 MiB/s [2024-10-15T11:07:10.277Z] 7841.45 IOPS, 30.63 MiB/s [2024-10-15T11:07:11.214Z] 8133.92 IOPS, 31.77 MiB/s [2024-10-15T11:07:12.151Z] 8370.23 IOPS, 32.70 MiB/s [2024-10-15T11:07:13.528Z] 8583.93 IOPS, 33.53 MiB/s 00:26:53.209 Latency(us) 00:26:53.209 [2024-10-15T11:07:13.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.209 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:53.209 Verification LBA range: start 0x0 length 0x4000 00:26:53.209 Nvme1n1 : 15.01 8760.05 34.22 11215.15 0.00 6388.32 659.26 12982.37 00:26:53.209 [2024-10-15T11:07:13.528Z] =================================================================================================================== 00:26:53.209 [2024-10-15T11:07:13.528Z] Total : 8760.05 34.22 11215.15 0.00 6388.32 659.26 12982.37 00:26:53.209 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:53.209 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.209 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.209 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:53.209 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.209 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:53.209 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:53.210 rmmod nvme_tcp 00:26:53.210 rmmod nvme_fabrics 00:26:53.210 rmmod nvme_keyring 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 1367405 ']' 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 1367405 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1367405 ']' 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1367405 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1367405 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1367405' 00:26:53.210 killing process with pid 1367405 00:26:53.210 
13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1367405 00:26:53.210 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1367405 00:26:53.469 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:53.469 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:53.469 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:53.469 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:53.469 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:26:53.469 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:53.469 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:26:53.469 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:53.469 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:53.469 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.469 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.469 13:07:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.374 13:07:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:55.374 00:26:55.374 real 0m26.039s 00:26:55.374 user 1m0.558s 00:26:55.374 sys 0m6.854s 00:26:55.374 13:07:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:55.374 13:07:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.374 ************************************ 00:26:55.374 END TEST nvmf_bdevperf 00:26:55.374 
************************************ 00:26:55.374 13:07:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:55.374 13:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:55.374 13:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:55.374 13:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.634 ************************************ 00:26:55.634 START TEST nvmf_target_disconnect 00:26:55.634 ************************************ 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:55.634 * Looking for test storage... 00:26:55.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.634 --rc genhtml_branch_coverage=1 00:26:55.634 --rc genhtml_function_coverage=1 00:26:55.634 --rc genhtml_legend=1 00:26:55.634 --rc geninfo_all_blocks=1 00:26:55.634 --rc geninfo_unexecuted_blocks=1 
00:26:55.634 00:26:55.634 ' 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.634 --rc genhtml_branch_coverage=1 00:26:55.634 --rc genhtml_function_coverage=1 00:26:55.634 --rc genhtml_legend=1 00:26:55.634 --rc geninfo_all_blocks=1 00:26:55.634 --rc geninfo_unexecuted_blocks=1 00:26:55.634 00:26:55.634 ' 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.634 --rc genhtml_branch_coverage=1 00:26:55.634 --rc genhtml_function_coverage=1 00:26:55.634 --rc genhtml_legend=1 00:26:55.634 --rc geninfo_all_blocks=1 00:26:55.634 --rc geninfo_unexecuted_blocks=1 00:26:55.634 00:26:55.634 ' 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.634 --rc genhtml_branch_coverage=1 00:26:55.634 --rc genhtml_function_coverage=1 00:26:55.634 --rc genhtml_legend=1 00:26:55.634 --rc geninfo_all_blocks=1 00:26:55.634 --rc geninfo_unexecuted_blocks=1 00:26:55.634 00:26:55.634 ' 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.634 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.635 13:07:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:55.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:55.635 13:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:02.207 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.207 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:02.207 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:02.207 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:02.207 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:02.207 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:02.207 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:02.207 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:02.207 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:02.207 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:02.208 
13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:02.208 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:02.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:02.208 Found net devices under 0000:86:00.0: cvl_0_0 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:02.208 Found net devices under 0000:86:00.1: cvl_0_1 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.208 13:07:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:02.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:27:02.208 00:27:02.208 --- 10.0.0.2 ping statistics --- 00:27:02.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.208 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:02.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:27:02.208 00:27:02.208 --- 10.0.0.1 ping statistics --- 00:27:02.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.208 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:27:02.208 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:02.209 13:07:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:02.209 ************************************ 00:27:02.209 START TEST nvmf_target_disconnect_tc1 00:27:02.209 ************************************ 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:02.209 13:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:02.209 [2024-10-15 13:07:22.018549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-10-15 13:07:22.018589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa4b70 with 
addr=10.0.0.2, port=4420 00:27:02.209 [2024-10-15 13:07:22.018615] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:02.209 [2024-10-15 13:07:22.018628] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:02.209 [2024-10-15 13:07:22.018634] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:02.209 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:02.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:02.209 Initializing NVMe Controllers 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:02.209 00:27:02.209 real 0m0.108s 00:27:02.209 user 0m0.046s 00:27:02.209 sys 0m0.062s 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:02.209 ************************************ 00:27:02.209 END TEST nvmf_target_disconnect_tc1 00:27:02.209 ************************************ 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:02.209 13:07:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:02.209 ************************************ 00:27:02.209 START TEST nvmf_target_disconnect_tc2 00:27:02.209 ************************************ 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1372512 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1372512 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1372512 ']' 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.209 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.209 [2024-10-15 13:07:22.159323] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:27:02.209 [2024-10-15 13:07:22.159362] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.209 [2024-10-15 13:07:22.230862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:02.209 [2024-10-15 13:07:22.275925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.209 [2024-10-15 13:07:22.275956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.209 [2024-10-15 13:07:22.275963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.209 [2024-10-15 13:07:22.275969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.209 [2024-10-15 13:07:22.275974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:02.209 [2024-10-15 13:07:22.277572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:02.209 [2024-10-15 13:07:22.277680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:02.209 [2024-10-15 13:07:22.277796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:02.209 [2024-10-15 13:07:22.277797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:02.779 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:02.779 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:02.779 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:02.779 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:02.779 13:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.779 Malloc0 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.779 13:07:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.779 [2024-10-15 13:07:23.066867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:02.779 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.780 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.780 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.780 13:07:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:02.780 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.780 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.780 [2024-10-15 13:07:23.099117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.039 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.039 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:03.039 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.039 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:03.039 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.039 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1372759 00:27:03.039 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:03.039 13:07:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:04.955 13:07:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1372512 00:27:04.955 13:07:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:04.955 Read completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Read completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Read completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Read completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Read completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Read completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Read completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Read completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Write completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Read completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Read completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Write completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Write completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Write completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Write completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Write completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Write completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Read completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Read completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 Write completed with error (sct=0, sc=8) 00:27:04.955 starting I/O failed 00:27:04.955 
00:27:04.955 Write completed with error (sct=0, sc=8)
00:27:04.955 starting I/O failed
00:27:04.955 Read completed with error (sct=0, sc=8)
00:27:04.955 starting I/O failed
[... identical "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs repeated for every outstanding I/O on qpair ids 1-4 ...]
00:27:04.955 [2024-10-15 13:07:25.127623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.956 [2024-10-15 13:07:25.127823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:04.956 [2024-10-15 13:07:25.128017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.956 [2024-10-15 13:07:25.128205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:04.956 [2024-10-15 13:07:25.128413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.956 [2024-10-15 13:07:25.128432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:04.956 qpair failed and we were unable to recover it.
00:27:04.957 [2024-10-15 13:07:25.128608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.957 [2024-10-15 13:07:25.128630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:04.957 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeated through 13:07:25.134270 ...]
00:27:04.957 [2024-10-15 13:07:25.134457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.957 [2024-10-15 13:07:25.134506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:04.957 qpair failed and we were unable to recover it.
[... the same sequence repeated for tqpair=0x7ff120000b90 through 13:07:25.143384 ...]
00:27:04.959 [2024-10-15 13:07:25.143721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.959 [2024-10-15 13:07:25.143805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:04.959 qpair failed and we were unable to recover it.
[... the same sequence repeated for tqpair=0x7ff114000b90 through 13:07:25.148240 ...]
00:27:04.959 [2024-10-15 13:07:25.148444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.959 [2024-10-15 13:07:25.148477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.959 qpair failed and we were unable to recover it. 00:27:04.959 [2024-10-15 13:07:25.148699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.959 [2024-10-15 13:07:25.148734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.959 qpair failed and we were unable to recover it. 00:27:04.959 [2024-10-15 13:07:25.148990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.959 [2024-10-15 13:07:25.149023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.959 qpair failed and we were unable to recover it. 00:27:04.959 [2024-10-15 13:07:25.149246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.959 [2024-10-15 13:07:25.149280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.959 qpair failed and we were unable to recover it. 00:27:04.959 [2024-10-15 13:07:25.149413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.959 [2024-10-15 13:07:25.149445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.959 qpair failed and we were unable to recover it. 
00:27:04.959 [2024-10-15 13:07:25.149638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.149672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.149875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.149908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.150215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.150248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.150456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.150488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.150674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.150709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 
00:27:04.960 [2024-10-15 13:07:25.150846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.150880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.151074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.151107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.151324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.151358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.151624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.151659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.151865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.151897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 
00:27:04.960 [2024-10-15 13:07:25.152071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.152105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.152368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.152402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.152697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.152732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.152864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.152897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.153158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.153231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 
00:27:04.960 [2024-10-15 13:07:25.153508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.153546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.153764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.153800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.153994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.154027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.154258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.154291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.154546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.154579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 
00:27:04.960 [2024-10-15 13:07:25.154856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.154889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.155107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.155139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.155345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.155378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.155509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.155543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.155744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.155778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 
00:27:04.960 [2024-10-15 13:07:25.155973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.156005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.156195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.156227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.156442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.156473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.156684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.156721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.156901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.156933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 
00:27:04.960 [2024-10-15 13:07:25.157137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.157170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.157360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.157392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.960 qpair failed and we were unable to recover it. 00:27:04.960 [2024-10-15 13:07:25.157655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.960 [2024-10-15 13:07:25.157689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.157823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.157855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.158075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.158106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 
00:27:04.961 [2024-10-15 13:07:25.158248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.158280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.158481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.158515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.158702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.158737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.158977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.159010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.159190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.159224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 
00:27:04.961 [2024-10-15 13:07:25.159352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.159385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.159624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.159664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.159849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.159881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.160062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.160095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.160380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.160413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 
00:27:04.961 [2024-10-15 13:07:25.160710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.160745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.160978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.161010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.161263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.161297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.161522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.161556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.161693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.161728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 
00:27:04.961 [2024-10-15 13:07:25.162016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.162049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.162184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.162217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.162426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.162459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.162721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.162755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.162945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.162978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 
00:27:04.961 [2024-10-15 13:07:25.163251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.163285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.163469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.163502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.163704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.163739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.961 qpair failed and we were unable to recover it. 00:27:04.961 [2024-10-15 13:07:25.163882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.961 [2024-10-15 13:07:25.163914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.962 qpair failed and we were unable to recover it. 00:27:04.962 [2024-10-15 13:07:25.164125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.962 [2024-10-15 13:07:25.164157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.962 qpair failed and we were unable to recover it. 
00:27:04.962 [2024-10-15 13:07:25.164331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.962 [2024-10-15 13:07:25.164364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.962 qpair failed and we were unable to recover it. 00:27:04.962 [2024-10-15 13:07:25.164542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.962 [2024-10-15 13:07:25.164575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.962 qpair failed and we were unable to recover it. 00:27:04.962 [2024-10-15 13:07:25.164854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.962 [2024-10-15 13:07:25.164889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.962 qpair failed and we were unable to recover it. 00:27:04.962 [2024-10-15 13:07:25.165089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.962 [2024-10-15 13:07:25.165122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.962 qpair failed and we were unable to recover it. 00:27:04.962 [2024-10-15 13:07:25.165248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.962 [2024-10-15 13:07:25.165282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.962 qpair failed and we were unable to recover it. 
00:27:04.962 [2024-10-15 13:07:25.165499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.962 [2024-10-15 13:07:25.165532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.962 qpair failed and we were unable to recover it. 00:27:04.962 [2024-10-15 13:07:25.165817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.962 [2024-10-15 13:07:25.165852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.962 qpair failed and we were unable to recover it. 00:27:04.962 [2024-10-15 13:07:25.166046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.962 [2024-10-15 13:07:25.166079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.962 qpair failed and we were unable to recover it. 00:27:04.962 [2024-10-15 13:07:25.166356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.962 [2024-10-15 13:07:25.166389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.962 qpair failed and we were unable to recover it. 00:27:04.962 [2024-10-15 13:07:25.166633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.962 [2024-10-15 13:07:25.166668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.962 qpair failed and we were unable to recover it. 
00:27:04.962 [2024-10-15 13:07:25.166817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.962 [2024-10-15 13:07:25.166849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:04.962 qpair failed and we were unable to recover it.
00:27:04.965 [2024-10-15 13:07:25.195728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-10-15 13:07:25.195762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-10-15 13:07:25.196036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-10-15 13:07:25.196069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-10-15 13:07:25.196365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-10-15 13:07:25.196399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-10-15 13:07:25.196662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-10-15 13:07:25.196698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-10-15 13:07:25.196887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.196919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.966 [2024-10-15 13:07:25.197111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.197145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.197444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.197479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.197669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.197703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.197844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.197877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.198012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.198046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.966 [2024-10-15 13:07:25.198262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.198295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.198599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.198640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.198784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.198817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.199014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.199047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.199255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.199289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.966 [2024-10-15 13:07:25.199558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.199591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.199814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.199848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.200132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.200166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.200300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.200334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.200519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.200551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.966 [2024-10-15 13:07:25.200706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.200740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.200972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.201007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.201147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.201179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.201429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.201464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.201614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.201648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.966 [2024-10-15 13:07:25.201914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.201946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.202141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.202174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.202437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.202471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.202668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.202702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.202894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.202927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.966 [2024-10-15 13:07:25.203119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.203151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-10-15 13:07:25.203365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-10-15 13:07:25.203398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.203669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.203704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.203950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.203995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.204126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.204160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 
00:27:04.967 [2024-10-15 13:07:25.204410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.204443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.204686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.204720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.204860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.204893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.205085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.205118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.205344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.205378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 
00:27:04.967 [2024-10-15 13:07:25.205677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.205712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.205970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.206003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.206296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.206329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.206622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.206659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.206880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.206914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 
00:27:04.967 [2024-10-15 13:07:25.207127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.207161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.207366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.207401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.207718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.207755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.207893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.207927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.208047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.208080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 
00:27:04.967 [2024-10-15 13:07:25.208282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.208315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.208503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.208537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.208823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.208857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.209062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.209095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.209236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.209268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 
00:27:04.967 [2024-10-15 13:07:25.209406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.209440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.209753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.209788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.210034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.210067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.210219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.210253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.210385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.210419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 
00:27:04.967 [2024-10-15 13:07:25.210620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.210661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.210863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.210896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.211120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.211154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.211401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-10-15 13:07:25.211434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-10-15 13:07:25.211730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.211765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 
00:27:04.968 [2024-10-15 13:07:25.211976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.212009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 00:27:04.968 [2024-10-15 13:07:25.212139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.212173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 00:27:04.968 [2024-10-15 13:07:25.212404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.212438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 00:27:04.968 [2024-10-15 13:07:25.212731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.212767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 00:27:04.968 [2024-10-15 13:07:25.212965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.212998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 
00:27:04.968 [2024-10-15 13:07:25.213213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.213247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 00:27:04.968 [2024-10-15 13:07:25.213470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.213503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 00:27:04.968 [2024-10-15 13:07:25.213702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.213737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 00:27:04.968 [2024-10-15 13:07:25.213886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.213919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 00:27:04.968 [2024-10-15 13:07:25.214250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.214326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 
00:27:04.968 [2024-10-15 13:07:25.214544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.214581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 00:27:04.968 [2024-10-15 13:07:25.214754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.214791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 00:27:04.968 [2024-10-15 13:07:25.214942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.214977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 00:27:04.968 [2024-10-15 13:07:25.215162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.215195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 00:27:04.968 [2024-10-15 13:07:25.215479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.968 [2024-10-15 13:07:25.215513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.968 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-10-15 13:07:25.244386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.244419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.244664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.244700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.244857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.244891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.245158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.245193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.245503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.245537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-10-15 13:07:25.245743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.245779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.245976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.246011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.246218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.246252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.246505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.246540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.246785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.246822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-10-15 13:07:25.246967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.247002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.247249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.247283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.247441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.247475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.247706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.247741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.248041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.248075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-10-15 13:07:25.248345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.248378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.248637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.248674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.248885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.248925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.249122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.249155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.249350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.249385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-10-15 13:07:25.249579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.249622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.249861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.249895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.250194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.250229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.250504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.250538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.250824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.250859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-10-15 13:07:25.251014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.251048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.251244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.251278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.251463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.251496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.251707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.251743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-10-15 13:07:25.251969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-10-15 13:07:25.252003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-10-15 13:07:25.252131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.252166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.252372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.252407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.252613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.252649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.252849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.252883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.253106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.253142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-10-15 13:07:25.253395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.253430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.253639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.253676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.253877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.253912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.254171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.254205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.254502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.254537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-10-15 13:07:25.254865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.254901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.255047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.255082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.255272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.255307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.255618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.255655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.255889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.255935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-10-15 13:07:25.256140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.256175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.256369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.256404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.256608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.256644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.256807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.256843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.257074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.257109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-10-15 13:07:25.257395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.257430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.257660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.257697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.257851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.257885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.258080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.258115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.258433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.258468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-10-15 13:07:25.258730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.258767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.258993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.259028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.259299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.259333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.259627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.259664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.259933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.259966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-10-15 13:07:25.260306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.260341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.260541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.260576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-10-15 13:07:25.260748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-10-15 13:07:25.260784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.260992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.261027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.261323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.261358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 
00:27:04.974 [2024-10-15 13:07:25.261626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.261663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.261885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.261919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.262064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.262099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.262409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.262444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.262650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.262687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 
00:27:04.974 [2024-10-15 13:07:25.262906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.262940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.263159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.263195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.263519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.263554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.263824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.263860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.264117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.264151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 
00:27:04.974 [2024-10-15 13:07:25.264450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.264484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.264722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.264758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.264928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.264962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.265250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.265286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-10-15 13:07:25.265505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-10-15 13:07:25.265538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 
00:27:05.266 [2024-10-15 13:07:25.297238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.266 [2024-10-15 13:07:25.297273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.297516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.297551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.297790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.297825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.297969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.298003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.298161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.298196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-10-15 13:07:25.298495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.298528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.298758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.298793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.298944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.298979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.299208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.299243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.299485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.299518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-10-15 13:07:25.299820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.299856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.300156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.300191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.300389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.300423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.300622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.300658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.300868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.300909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-10-15 13:07:25.301112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.301146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.301349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.301382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.301523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.301558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.301801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.301837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.301961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.301995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-10-15 13:07:25.302199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.302233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.302375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.302410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.302689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.302725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.302917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.302952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.303109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.303144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-10-15 13:07:25.303406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.303440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.303750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.303786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.303942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.303976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.304263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.304298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.304517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.304551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-10-15 13:07:25.304724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.304760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.305015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.305049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.305172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.305206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.305427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.305462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-10-15 13:07:25.305739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-10-15 13:07:25.305775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-10-15 13:07:25.305974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.306010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.306146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.306181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.306325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.306359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.306469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.306503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.306768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.306804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 
00:27:05.268 [2024-10-15 13:07:25.307012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.307046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.307192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.307227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.307454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.307488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.307755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.307791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.307984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.308019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 
00:27:05.268 [2024-10-15 13:07:25.308298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.308333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.308585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.308628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.308857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.308892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.309174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.309208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.309392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.309427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 
00:27:05.268 [2024-10-15 13:07:25.309635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.309671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.309864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.309899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.310094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.310129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.310372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.310406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.310545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.310586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 
00:27:05.268 [2024-10-15 13:07:25.310805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.310841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.310957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.310991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.311133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.311167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.311351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.311385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.311646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.311681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 
00:27:05.268 [2024-10-15 13:07:25.311903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.311936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.312168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.312202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.312336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.312370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.312653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.312690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.312838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.312872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 
00:27:05.268 [2024-10-15 13:07:25.313011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-10-15 13:07:25.313045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-10-15 13:07:25.313387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.313421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-10-15 13:07:25.313696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.313732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-10-15 13:07:25.313942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.313976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-10-15 13:07:25.314234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.314268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 
00:27:05.269 [2024-10-15 13:07:25.314541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.314576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-10-15 13:07:25.314817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.314854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-10-15 13:07:25.315059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.315093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-10-15 13:07:25.315397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.315432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-10-15 13:07:25.315714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.315751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 
00:27:05.269 [2024-10-15 13:07:25.315953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.315988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-10-15 13:07:25.316254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.316289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-10-15 13:07:25.316515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.316549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-10-15 13:07:25.316770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.316806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-10-15 13:07:25.316999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-10-15 13:07:25.317032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 
00:27:05.272 [2024-10-15 13:07:25.345074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.345108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-10-15 13:07:25.345371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.345406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-10-15 13:07:25.345691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.345728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-10-15 13:07:25.345936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.345971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-10-15 13:07:25.346227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.346262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 
00:27:05.272 [2024-10-15 13:07:25.346567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.346627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-10-15 13:07:25.346825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.346860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-10-15 13:07:25.347133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.347167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-10-15 13:07:25.347436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.347471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-10-15 13:07:25.347656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.347693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 
00:27:05.272 [2024-10-15 13:07:25.347901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.347935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-10-15 13:07:25.348223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.348257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-10-15 13:07:25.348477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.348511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-10-15 13:07:25.348791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-10-15 13:07:25.348827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-10-15 13:07:25.349126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.349160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-10-15 13:07:25.349415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.349450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.349728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.349764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.349989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.350023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.350170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.350205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.350421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.350456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-10-15 13:07:25.350763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.350798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.351053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.351086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.351384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.351419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.351646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.351682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.351828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.351867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-10-15 13:07:25.352075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.352109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.352380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.352415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.352738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.352774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.353052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.353086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.353316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.353349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-10-15 13:07:25.353535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.353568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.353828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.353864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.354059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.354092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.354352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.354387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.354671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.354708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-10-15 13:07:25.354925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.354959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.355140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.355175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.355383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.355418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.355677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.355712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.355915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.355949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-10-15 13:07:25.356219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.356253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.356508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.356543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.356716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.356751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.356939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.356974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-10-15 13:07:25.357202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.357235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-10-15 13:07:25.357516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-10-15 13:07:25.357550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.357699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.357735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.357923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.357957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.358161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.358196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.358395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.358431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-10-15 13:07:25.358706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.358742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.359033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.359069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.359276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.359309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.359510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.359544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.359762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.359797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-10-15 13:07:25.359951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.359984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.360196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.360230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.360370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.360404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.360546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.360580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.360838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.360873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-10-15 13:07:25.361060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.361094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.361250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.361284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.361502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.361536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.361778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.361814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.362018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.362059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-10-15 13:07:25.362293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.362328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.362554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.362589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.362758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.362794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.362937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.362970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.363173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.363207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-10-15 13:07:25.363422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.363458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.363580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.363623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.363832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.363867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.364147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.364182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.364410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.364444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-10-15 13:07:25.364697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.364734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.364891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.364926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.365130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.365164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.365513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.365549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-10-15 13:07:25.365747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.365783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-10-15 13:07:25.366000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-10-15 13:07:25.366034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.275 qpair failed and we were unable to recover it. 00:27:05.275 [2024-10-15 13:07:25.366271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.275 [2024-10-15 13:07:25.366306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.275 qpair failed and we were unable to recover it. 00:27:05.275 [2024-10-15 13:07:25.366512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.275 [2024-10-15 13:07:25.366546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.275 qpair failed and we were unable to recover it. 00:27:05.275 [2024-10-15 13:07:25.366825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.275 [2024-10-15 13:07:25.366861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.275 qpair failed and we were unable to recover it. 00:27:05.275 [2024-10-15 13:07:25.367070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.275 [2024-10-15 13:07:25.367105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.275 qpair failed and we were unable to recover it. 
00:27:05.275 [2024-10-15 13:07:25.367331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.367365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.367563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.367598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.367871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.367906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.368109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.368143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.368325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.368360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.368625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.368661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.368815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.368849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.369053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.369087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.369381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.369415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.369647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.369684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.369916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.369949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.370159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.370192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.370504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.370540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.370766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.370801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.371010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.371044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.371230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.371265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.371520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.371555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.371774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.371809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.372070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.372104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.372364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.372411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.372670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.372707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.372901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.372936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.373238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.373273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.373536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.373570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.373785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.373821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.373956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.373990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.374194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.374229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.374432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.374466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.374658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.374694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.374827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.374861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-10-15 13:07:25.375117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-10-15 13:07:25.375151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.375433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.375468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.375721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.375758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.375974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.376009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.376300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.376334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.376634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.376671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.376890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.376924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.377082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.377116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.377332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.377366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.377565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.377628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.377788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.377822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.377973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.378007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.378323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.378357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.378564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.378598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.378749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.378784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.378985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.379020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.379231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aebb0 is same with the state(6) to be set
00:27:05.276 [2024-10-15 13:07:25.379622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.379703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.380015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.380053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.380346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.380382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.380590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.380644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.380796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.380831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.381036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.381071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.381329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.381362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.381569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.381616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.381779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.381814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.381973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.382006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.382153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.382186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.382461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.382495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.382731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.382767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.382920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.382954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.383234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.383268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.383492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.383526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.383757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.383794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.383921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.383955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.384252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.384286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-10-15 13:07:25.384566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-10-15 13:07:25.384609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.384748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.384781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.384990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.385023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.385217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.385251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.385477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.385510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.385767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.385802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.386002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.386036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.386167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.386206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.386421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.386454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.386656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.386692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.386948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.386983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.387111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.387145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.387430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.387464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.387704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.387740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.388026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.388060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.388296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.388330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.388636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.388672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.388878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.388912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.389067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.389101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.389303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.389337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.389635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.389671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.389834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.389869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.390009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.390043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.390251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.390285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.390568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.390618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.390805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.390841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.391036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.391070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.391400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.391434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.391747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.391783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.391987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-10-15 13:07:25.392022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-10-15 13:07:25.392236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-10-15 13:07:25.392271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-10-15 13:07:25.392575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-10-15 13:07:25.392618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-10-15 13:07:25.392865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-10-15 13:07:25.392899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-10-15 13:07:25.393156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-10-15 13:07:25.393190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-10-15 13:07:25.393479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-10-15 13:07:25.393513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 
00:27:05.277 [2024-10-15 13:07:25.393743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-10-15 13:07:25.393779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.394079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.394114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.394376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.394410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.394621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.394657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.394883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.394917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-10-15 13:07:25.395040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.395075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.395297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.395331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.395544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.395578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.395730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.395764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.395964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.395997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-10-15 13:07:25.396140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.396174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.396394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.396428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.396646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.396689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.396840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.396873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.397080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.397115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-10-15 13:07:25.397340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.397373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.397578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.397621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.397806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.397841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.397999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.398033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.398249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.398283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-10-15 13:07:25.398471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.398506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.398716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.398752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.398948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.398984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.399183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.399217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.399496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.399530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-10-15 13:07:25.399685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.399720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.399982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.400016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.400238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.400272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.400484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.400519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.400817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.400853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-10-15 13:07:25.401146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.401180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-10-15 13:07:25.401386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-10-15 13:07:25.401420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.401674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.401710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.401917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.401951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.402157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.402193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-10-15 13:07:25.402472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.402507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.402734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.402769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.402918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.402952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.403102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.403137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.403262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.403296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-10-15 13:07:25.403493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.403528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.403707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.403743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.403935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.403971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.404181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.404215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.404499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.404533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-10-15 13:07:25.404691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.404728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.404937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.404971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.405124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.405158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.405372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.405406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.405624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.405660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-10-15 13:07:25.405791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.405825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.406018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.406052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.406333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.406373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.406521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.406555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.406760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.406797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-10-15 13:07:25.406920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.406955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.407070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.407105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.407352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.407387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.407575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.407619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.407773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.407808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-10-15 13:07:25.408018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.408054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.408253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.408287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.408592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.408657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.408920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.408955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.409100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.409135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-10-15 13:07:25.409437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.409471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-10-15 13:07:25.409752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-10-15 13:07:25.409788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.280 [2024-10-15 13:07:25.410046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.410080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-10-15 13:07:25.410357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.410391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-10-15 13:07:25.410611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.410646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 
00:27:05.280 [2024-10-15 13:07:25.410803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.410838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-10-15 13:07:25.411023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.411057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-10-15 13:07:25.411323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.411357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-10-15 13:07:25.411539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.411575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-10-15 13:07:25.411790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.411825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 
00:27:05.280 [2024-10-15 13:07:25.411974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.412008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-10-15 13:07:25.412219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.412254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-10-15 13:07:25.412468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.412502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-10-15 13:07:25.412713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.412749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-10-15 13:07:25.412897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-10-15 13:07:25.412931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 
00:27:05.280 [2024-10-15 13:07:25.413130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.413164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.413396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.413429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.413556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.413590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.413753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.413787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.413982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.414016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.414162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.414197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.414435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.414469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.414662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.414699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.414835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.414870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.414999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.415032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.415168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.415203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.415396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.415430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.415656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.415697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.415838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.415872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.416009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.416044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.416239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.416274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.416398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.416433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.416637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.416672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.416875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.416909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.417117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.417152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.417352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.417386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.417684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.280 [2024-10-15 13:07:25.417720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.280 qpair failed and we were unable to recover it.
00:27:05.280 [2024-10-15 13:07:25.417988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.418023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.418310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.418344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.418538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.418573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.418819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.418854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.419091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.419126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.419374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.419408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.419550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.419584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.419783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.419818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.419961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.419997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.420236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.420271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.420528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.420564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.420728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.420763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.421019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.421051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.421380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.421414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.421643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.421679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.421877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.421911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.422118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.422153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.422415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.422449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.422673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.422710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.422995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.423030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.423188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.423222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.423450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.423485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.423681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.423716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.423864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.423898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.424102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.424136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.424273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.424307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.424595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.424640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.424777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.424812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.425006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.425043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.425178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.425212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.425517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.425557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.425776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.425810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.426084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.426118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.426322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.426356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.426578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.281 [2024-10-15 13:07:25.426623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.281 qpair failed and we were unable to recover it.
00:27:05.281 [2024-10-15 13:07:25.426772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.426807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.427063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.427098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.427410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.427444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.427651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.427687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.427896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.427931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.428152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.428187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.428442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.428478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.428624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.428660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.428846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.428881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.429149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.429182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.429321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.429355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.429578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.429621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.429781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.429817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.429972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.430007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.430305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.430339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.430594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.430636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.430843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.430877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.431095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.431129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.431423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.431457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.431714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.431750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.431973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.432007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.432278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.432312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.432595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.432684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.432904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.432944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.433110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.433146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.433290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.433325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.433626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.433663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.433822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.433860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.434117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.434152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.434342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.434377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.434634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.434670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.434932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.434966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.435228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.435262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.435498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.435532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.435802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.282 [2024-10-15 13:07:25.435840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.282 qpair failed and we were unable to recover it.
00:27:05.282 [2024-10-15 13:07:25.436044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.436094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.436325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.436361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.436542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.436576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.436819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.436854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.437064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.437098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.437440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.437473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.437644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.437681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.437993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.438029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.438195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.438231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.438533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.438568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.438829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.438865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.439052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.439087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.439373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.439407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.439666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.439702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.439874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.439908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.440108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.440141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.440448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.440487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.440686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.440723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.440934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.440970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.441174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.441208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.441404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.441440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.441652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.441688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.441970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.283 [2024-10-15 13:07:25.442005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.283 qpair failed and we were unable to recover it.
00:27:05.283 [2024-10-15 13:07:25.442211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.283 [2024-10-15 13:07:25.442250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.283 qpair failed and we were unable to recover it. 00:27:05.283 [2024-10-15 13:07:25.442476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.283 [2024-10-15 13:07:25.442511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.283 qpair failed and we were unable to recover it. 00:27:05.283 [2024-10-15 13:07:25.442751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.283 [2024-10-15 13:07:25.442791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.283 qpair failed and we were unable to recover it. 00:27:05.283 [2024-10-15 13:07:25.442995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.283 [2024-10-15 13:07:25.443029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.283 qpair failed and we were unable to recover it. 00:27:05.283 [2024-10-15 13:07:25.443332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.283 [2024-10-15 13:07:25.443408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.283 qpair failed and we were unable to recover it. 
00:27:05.283 [2024-10-15 13:07:25.443654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.283 [2024-10-15 13:07:25.443694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.283 qpair failed and we were unable to recover it. 00:27:05.283 [2024-10-15 13:07:25.443956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.283 [2024-10-15 13:07:25.443992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.283 qpair failed and we were unable to recover it. 00:27:05.283 [2024-10-15 13:07:25.444270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.283 [2024-10-15 13:07:25.444304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.283 qpair failed and we were unable to recover it. 00:27:05.283 [2024-10-15 13:07:25.444587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.444635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.444846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.444879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 
00:27:05.284 [2024-10-15 13:07:25.445134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.445168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.445511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.445545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.445795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.445830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.446036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.446070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.446326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.446361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 
00:27:05.284 [2024-10-15 13:07:25.446570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.446631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.446837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.446871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.447023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.447057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.447318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.447353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.447506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.447539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 
00:27:05.284 [2024-10-15 13:07:25.447748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.447783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.447935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.447969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.448122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.448155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.448339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.448374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.448629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.448666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 
00:27:05.284 [2024-10-15 13:07:25.448865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.448899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.449053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.449087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.449237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.449271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.449552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.449587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.449744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.449777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 
00:27:05.284 [2024-10-15 13:07:25.449977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.450011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.450148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.450189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.450391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.450424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.450652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.450687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.450897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.450930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 
00:27:05.284 [2024-10-15 13:07:25.451090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.451124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.451259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.451294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.451494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.451527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.451709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.451745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.451948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.451982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 
00:27:05.284 [2024-10-15 13:07:25.452173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.452206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.452464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.452500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.452745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.452780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.284 [2024-10-15 13:07:25.452974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.284 [2024-10-15 13:07:25.453008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.284 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.453193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.453227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 
00:27:05.285 [2024-10-15 13:07:25.453490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.453524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.453811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.453846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.454104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.454138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.454440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.454477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.454632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.454667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 
00:27:05.285 [2024-10-15 13:07:25.454873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.454908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.455069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.455104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.455266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.455300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.455554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.455589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.455742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.455775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 
00:27:05.285 [2024-10-15 13:07:25.455969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.456005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.456303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.456337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.456625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.456661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.456965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.457000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.457209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.457244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 
00:27:05.285 [2024-10-15 13:07:25.457587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.457630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.457842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.457875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.458129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.458163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.458427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.458461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.458759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.458795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 
00:27:05.285 [2024-10-15 13:07:25.458999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.459032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.459252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.459286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.459485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.459520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.459845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.459880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.460029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.460063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 
00:27:05.285 [2024-10-15 13:07:25.460269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.460303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.460500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.460534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.460809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.460851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.461160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.461195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.461341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.461375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 
00:27:05.285 [2024-10-15 13:07:25.461505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.461539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.461772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.461807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.285 [2024-10-15 13:07:25.461962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.285 [2024-10-15 13:07:25.461996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.285 qpair failed and we were unable to recover it. 00:27:05.286 [2024-10-15 13:07:25.462298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.286 [2024-10-15 13:07:25.462334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.286 qpair failed and we were unable to recover it. 00:27:05.286 [2024-10-15 13:07:25.462536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.286 [2024-10-15 13:07:25.462571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.286 qpair failed and we were unable to recover it. 
00:27:05.286 [2024-10-15 13:07:25.462785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.286 [2024-10-15 13:07:25.462819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.286 qpair failed and we were unable to recover it. 00:27:05.286 [2024-10-15 13:07:25.462956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.286 [2024-10-15 13:07:25.462989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.286 qpair failed and we were unable to recover it. 00:27:05.286 [2024-10-15 13:07:25.463190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.286 [2024-10-15 13:07:25.463224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.286 qpair failed and we were unable to recover it. 00:27:05.286 [2024-10-15 13:07:25.463432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.286 [2024-10-15 13:07:25.463464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.286 qpair failed and we were unable to recover it. 00:27:05.286 [2024-10-15 13:07:25.463714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.286 [2024-10-15 13:07:25.463749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.286 qpair failed and we were unable to recover it. 
00:27:05.286 [2024-10-15 13:07:25.463946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.463981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.464127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.464161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.464383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.464417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.464726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.464763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.464932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.464966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.465227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.465261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.465520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.465555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.465774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.465810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.465937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.465971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.466191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.466225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.466500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.466534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.466708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.466745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.466906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.466940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.467224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.467257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.467385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.467424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.467657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.467692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.467848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.467882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.468017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.468054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.468310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.468342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.468626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.468661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.468856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.468891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.469089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.469124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.469346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.469380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.469634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.469671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.469824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.469859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.286 [2024-10-15 13:07:25.470003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.286 [2024-10-15 13:07:25.470038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.286 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.470198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.470232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.470485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.470520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.470723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.470758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.470904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.470938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.471067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.471102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.471424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.471458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.471714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.471750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.471954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.471988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.472136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.472170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.472375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.472410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.472668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.472705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.473004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.473038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.473257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.473292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.473436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.473471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.473665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.473700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.473911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.473945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.474097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.474131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.474276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.474311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.474623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.474658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.474810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.474845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.474992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.475026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.475279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.475313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.475507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.475540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.475740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.475775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.475964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.475997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.476277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.476311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.476531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.476564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.476745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.476781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.477032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.477067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.477184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.477223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.477483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.477517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.477768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.477805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.477938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.477973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.478163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.478197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.478389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.478424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.287 qpair failed and we were unable to recover it.
00:27:05.287 [2024-10-15 13:07:25.478619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.287 [2024-10-15 13:07:25.478654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.478806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.478840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.479040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.479075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.479274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.479309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.479456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.479489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.479790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.479826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.480030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.480065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.480386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.480420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.480737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.480773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.480990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.481024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.481179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.481214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.481500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.481534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.481745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.481781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.481980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.482013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.482237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.482271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.482491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.482524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.482725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.482761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.482918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.482952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.483176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.483210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.483521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.483555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.483758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.483795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.483991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.484038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.484242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.484277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.484534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.484569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.484855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.484892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.485163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.485197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.485382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.485417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.485552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.485585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.485804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.485838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.485985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.486019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.288 qpair failed and we were unable to recover it.
00:27:05.288 [2024-10-15 13:07:25.486170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.288 [2024-10-15 13:07:25.486204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.289 qpair failed and we were unable to recover it.
00:27:05.289 [2024-10-15 13:07:25.486462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.289 [2024-10-15 13:07:25.486498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.289 qpair failed and we were unable to recover it.
00:27:05.289 [2024-10-15 13:07:25.486715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.289 [2024-10-15 13:07:25.486752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.289 qpair failed and we were unable to recover it.
00:27:05.289 [2024-10-15 13:07:25.486906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.289 [2024-10-15 13:07:25.486939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.289 qpair failed and we were unable to recover it.
00:27:05.289 [2024-10-15 13:07:25.487192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.487226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.487500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.487579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.487829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.487871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.488092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.488129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.488413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.488449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 
00:27:05.289 [2024-10-15 13:07:25.488593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.488646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.488936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.488973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.489174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.489209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.489415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.489453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.489662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.489701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 
00:27:05.289 [2024-10-15 13:07:25.489931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.489966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.490082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.490117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.490409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.490443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.490741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.490777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.490988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.491033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 
00:27:05.289 [2024-10-15 13:07:25.491271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.491308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.491512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.491550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.491857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.491902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.492113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.492150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.492367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.492409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 
00:27:05.289 [2024-10-15 13:07:25.492670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.492705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.492865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.492898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.493086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.493121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.493444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.493479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.493699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.493735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 
00:27:05.289 [2024-10-15 13:07:25.493988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.494023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.494282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.494321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.494463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.494497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.494763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.494800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.494961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.494994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 
00:27:05.289 [2024-10-15 13:07:25.495209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.495244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.289 [2024-10-15 13:07:25.495457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.289 [2024-10-15 13:07:25.495494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.289 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.495694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.495731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.495974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.496010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.496317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.496355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 
00:27:05.290 [2024-10-15 13:07:25.496551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.496588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.496782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.496819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.497075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.497110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.497353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.497388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.497637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.497676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 
00:27:05.290 [2024-10-15 13:07:25.497829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.497866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.498257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.498330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.498624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.498666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.498922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.498958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.499268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.499302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 
00:27:05.290 [2024-10-15 13:07:25.499495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.499531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.499820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.499858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.500058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.500091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.500378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.500412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.500688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.500726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 
00:27:05.290 [2024-10-15 13:07:25.500874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.500908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.501110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.501150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.501440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.501474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.501740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.501776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.501986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.502031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 
00:27:05.290 [2024-10-15 13:07:25.502337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.502372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.502630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.502666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.502945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.502979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.503185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.503219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.503369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.503403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 
00:27:05.290 [2024-10-15 13:07:25.503680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.503716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.503919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.503952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.504159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.504196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.504415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.504449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.504754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.504790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 
00:27:05.290 [2024-10-15 13:07:25.504921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.504955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.290 [2024-10-15 13:07:25.505183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.290 [2024-10-15 13:07:25.505217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.290 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.505501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.505535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.505769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.505805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.506088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.506123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 
00:27:05.291 [2024-10-15 13:07:25.506441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.506474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.506750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.506785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.506988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.507023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.507216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.507253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.507384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.507423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 
00:27:05.291 [2024-10-15 13:07:25.507588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.507643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.507804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.507836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.508033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.508067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.508199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.508233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.508485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.508519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 
00:27:05.291 [2024-10-15 13:07:25.508821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.508860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.509066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.509142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.509387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.509424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.509721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.509756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.291 [2024-10-15 13:07:25.509900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.509934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 
00:27:05.291 [2024-10-15 13:07:25.510080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.291 [2024-10-15 13:07:25.510114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.291 qpair failed and we were unable to recover it. 00:27:05.294 (last three messages repeated for each subsequent reconnect attempt through [2024-10-15 13:07:25.537187]; only the timestamps differ)
00:27:05.294 [2024-10-15 13:07:25.537464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.294 [2024-10-15 13:07:25.537499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.294 qpair failed and we were unable to recover it. 00:27:05.294 [2024-10-15 13:07:25.537695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.294 [2024-10-15 13:07:25.537737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.294 qpair failed and we were unable to recover it. 00:27:05.294 [2024-10-15 13:07:25.537876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.537908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.538102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.538137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.538358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.538393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 
00:27:05.295 [2024-10-15 13:07:25.538583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.538623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.538773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.538806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.539014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.539049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.539409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.539444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.539733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.539768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 
00:27:05.295 [2024-10-15 13:07:25.539968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.540003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.540207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.540241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.540506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.540540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.540750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.540784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.541061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.541095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 
00:27:05.295 [2024-10-15 13:07:25.541328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.541361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.541641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.541676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.541828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.541862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.542072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.542107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.542252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.542286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 
00:27:05.295 [2024-10-15 13:07:25.542575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.542618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.542842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.542875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.543024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.543058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.543245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.543276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.543413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.543446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 
00:27:05.295 [2024-10-15 13:07:25.543642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.543680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.543900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.543933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.544119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.544153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.544288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.544321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.544582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.544626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 
00:27:05.295 [2024-10-15 13:07:25.544832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.544867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.545076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.545110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.545390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.545424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.545682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.545718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.545863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.545896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 
00:27:05.295 [2024-10-15 13:07:25.546049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.546082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.546317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.546353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.295 qpair failed and we were unable to recover it. 00:27:05.295 [2024-10-15 13:07:25.546617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.295 [2024-10-15 13:07:25.546652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.546803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.546836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.546989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.547029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 
00:27:05.296 [2024-10-15 13:07:25.547236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.547272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.547489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.547530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.547740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.547776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.547912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.547945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.548152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.548185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 
00:27:05.296 [2024-10-15 13:07:25.548454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.548488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.548711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.548747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.548956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.548990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.549176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.549208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.549464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.549498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 
00:27:05.296 [2024-10-15 13:07:25.549722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.549757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.549957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.549990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.550248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.550283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.550493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.550526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.550718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.550753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 
00:27:05.296 [2024-10-15 13:07:25.550956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.550989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.551217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.551251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.551475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.551508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.551722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.551758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.551971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.552005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 
00:27:05.296 [2024-10-15 13:07:25.552165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.552199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.552477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.552511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.552772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.552808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.553015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.553049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.553180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.553214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 
00:27:05.296 [2024-10-15 13:07:25.553400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.553434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.553732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.553767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.553924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.553959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.554087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.554120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.554321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.554355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 
00:27:05.296 [2024-10-15 13:07:25.554561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.554596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.554814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.554849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-10-15 13:07:25.555000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-10-15 13:07:25.555032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.297 qpair failed and we were unable to recover it. 00:27:05.297 [2024-10-15 13:07:25.555330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.297 [2024-10-15 13:07:25.555364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.297 qpair failed and we were unable to recover it. 00:27:05.297 [2024-10-15 13:07:25.555639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.297 [2024-10-15 13:07:25.555675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.297 qpair failed and we were unable to recover it. 
00:27:05.297 [2024-10-15 13:07:25.555831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.297 [2024-10-15 13:07:25.555865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.297 qpair failed and we were unable to recover it.
00:27:05.581 [2024-10-15 13:07:25.587061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.581 [2024-10-15 13:07:25.587095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.581 qpair failed and we were unable to recover it. 00:27:05.581 [2024-10-15 13:07:25.587299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.581 [2024-10-15 13:07:25.587334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.581 qpair failed and we were unable to recover it. 00:27:05.581 [2024-10-15 13:07:25.587633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.581 [2024-10-15 13:07:25.587668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.581 qpair failed and we were unable to recover it. 00:27:05.581 [2024-10-15 13:07:25.587933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.581 [2024-10-15 13:07:25.587968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.581 qpair failed and we were unable to recover it. 00:27:05.581 [2024-10-15 13:07:25.588153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.581 [2024-10-15 13:07:25.588188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.581 qpair failed and we were unable to recover it. 
00:27:05.581 [2024-10-15 13:07:25.588393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.581 [2024-10-15 13:07:25.588428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.581 qpair failed and we were unable to recover it. 00:27:05.581 [2024-10-15 13:07:25.588699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.581 [2024-10-15 13:07:25.588735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.581 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.588988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.589022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.589314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.589350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.589553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.589586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 
00:27:05.582 [2024-10-15 13:07:25.589820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.589854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.590113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.590154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.590443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.590477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.590698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.590734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.590926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.590961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 
00:27:05.582 [2024-10-15 13:07:25.591160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.591196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.591375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.591409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.591689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.591725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.591913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.591947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.592143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.592177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 
00:27:05.582 [2024-10-15 13:07:25.592395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.592429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.592570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.592616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.592873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.592908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.593110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.593145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.593352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.593387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 
00:27:05.582 [2024-10-15 13:07:25.593585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.593646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.593780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.593815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.594096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.594129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.594323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.594358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.594658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.594694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 
00:27:05.582 [2024-10-15 13:07:25.594983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.595017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.595311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.595345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.595621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.595657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.595846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.595881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.596089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.596123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 
00:27:05.582 [2024-10-15 13:07:25.596255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.596289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.596557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.596590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.596820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.596855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.597085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.597120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.582 [2024-10-15 13:07:25.597303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.597337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 
00:27:05.582 [2024-10-15 13:07:25.597538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.582 [2024-10-15 13:07:25.597572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.582 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.597851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.597887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.598108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.598144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.598371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.598405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.598683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.598719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 
00:27:05.583 [2024-10-15 13:07:25.599002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.599037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.599261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.599296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.599573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.599616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.599893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.599927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.600183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.600217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 
00:27:05.583 [2024-10-15 13:07:25.600438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.600471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.600665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.600707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.600981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.601013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.601296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.601330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.601564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.601597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 
00:27:05.583 [2024-10-15 13:07:25.601915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.601949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.602146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.602181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.602436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.602470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.602747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.602784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.603073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.603107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 
00:27:05.583 [2024-10-15 13:07:25.603236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.603270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.603522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.603556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.603770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.603805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.603996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.604030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.604229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.604263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 
00:27:05.583 [2024-10-15 13:07:25.604545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.604579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.604882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.604916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.605177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.605212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.605486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.605521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 00:27:05.583 [2024-10-15 13:07:25.605800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.583 [2024-10-15 13:07:25.605836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.583 qpair failed and we were unable to recover it. 
00:27:05.583 [2024-10-15 13:07:25.606046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-10-15 13:07:25.606080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-10-15 13:07:25.606334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-10-15 13:07:25.606367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-10-15 13:07:25.606664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-10-15 13:07:25.606700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-10-15 13:07:25.606965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-10-15 13:07:25.606999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-10-15 13:07:25.607251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-10-15 13:07:25.607287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 
00:27:05.584 [2024-10-15 13:07:25.607584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-10-15 13:07:25.607628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it.
00:27:05.587 [2024-10-15 13:07:25.639367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.639401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 00:27:05.587 [2024-10-15 13:07:25.639589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.639633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 00:27:05.587 [2024-10-15 13:07:25.639929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.639964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 00:27:05.587 [2024-10-15 13:07:25.640186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.640221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 00:27:05.587 [2024-10-15 13:07:25.640424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.640458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 
00:27:05.587 [2024-10-15 13:07:25.640684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.640719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 00:27:05.587 [2024-10-15 13:07:25.640996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.641030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 00:27:05.587 [2024-10-15 13:07:25.641314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.641348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 00:27:05.587 [2024-10-15 13:07:25.641490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.641524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 00:27:05.587 [2024-10-15 13:07:25.641775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.641811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 
00:27:05.587 [2024-10-15 13:07:25.642091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.642126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 00:27:05.587 [2024-10-15 13:07:25.642387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.642421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 00:27:05.587 [2024-10-15 13:07:25.642657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.642692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 00:27:05.587 [2024-10-15 13:07:25.642914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.642948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 00:27:05.587 [2024-10-15 13:07:25.643142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.587 [2024-10-15 13:07:25.643177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.587 qpair failed and we were unable to recover it. 
00:27:05.587 [2024-10-15 13:07:25.643390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.643424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.643704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.643740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.644016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.644050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.644325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.644359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.644488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.644522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 
00:27:05.588 [2024-10-15 13:07:25.644844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.644879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.645067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.645108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.645264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.645297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.645586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.645645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.645876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.645911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 
00:27:05.588 [2024-10-15 13:07:25.646164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.646199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.646396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.646430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.646709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.646745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.647071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.647106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.647324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.647358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 
00:27:05.588 [2024-10-15 13:07:25.647559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.647594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.647788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.647823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.648005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.648039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.648259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.648294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.648495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.648530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 
00:27:05.588 [2024-10-15 13:07:25.648836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.648871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.649054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.649089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.649292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.649327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.649541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.649575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.649787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.649821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 
00:27:05.588 [2024-10-15 13:07:25.650041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.650076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.650327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.650362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.650502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.650536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.650814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.650850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.588 qpair failed and we were unable to recover it. 00:27:05.588 [2024-10-15 13:07:25.651110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.588 [2024-10-15 13:07:25.651145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-10-15 13:07:25.651446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.651480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.651747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.651783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.652078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.652113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.652321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.652356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.652568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.652609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-10-15 13:07:25.652809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.652843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.653097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.653130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.653408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.653443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.653697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.653732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.654006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.654040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-10-15 13:07:25.654301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.654336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.654496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.654530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.654782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.654818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.655041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.655076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.655354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.655387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-10-15 13:07:25.655591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.655657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.655912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.655953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.656247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.656281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.656561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.656596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.656734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.656767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-10-15 13:07:25.657047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.657081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.657207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.657240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.657441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.657475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.657750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.657785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.658043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.658078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-10-15 13:07:25.658338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.658372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.658626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.658661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.658913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.658947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.659207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.659241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-10-15 13:07:25.659512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-10-15 13:07:25.659546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-10-15 13:07:25.659837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.589 [2024-10-15 13:07:25.659873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.589 qpair failed and we were unable to recover it.
00:27:05.589 [2024-10-15 13:07:25.660021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.589 [2024-10-15 13:07:25.660055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.589 qpair failed and we were unable to recover it.
00:27:05.589 [2024-10-15 13:07:25.660356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.589 [2024-10-15 13:07:25.660391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.589 qpair failed and we were unable to recover it.
00:27:05.589 [2024-10-15 13:07:25.660595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.589 [2024-10-15 13:07:25.660639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.660909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.660942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.661166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.661201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.661315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.661350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.661572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.661627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.661835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.661875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.662150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.662183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.662389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.662423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.662678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.662715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.662928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.662963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.663162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.663196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.663469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.663509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.663646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.663682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.663886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.663922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.664117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.664152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.664426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.664460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.664646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.664682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.664826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.664860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.665117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.665151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.665355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.665390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.665523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.665558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.665858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.665895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.666088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.666122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.666247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.666288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.666429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.666462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.666742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.666778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.667081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.667116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.667342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.667377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.667520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.667555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.667841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.667876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.668070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.668104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.668406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.668441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.668573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.668619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.590 [2024-10-15 13:07:25.668902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.590 [2024-10-15 13:07:25.668936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.590 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.669123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.669158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.669361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.669395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.669650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.669685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.669949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.669982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.670234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.670268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.670521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.670556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.670713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.670749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.671004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.671038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.671223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.671257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.671539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.671573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.671780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.671815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.672096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.672131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.672433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.672468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.672685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.672721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.672930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.672964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.673158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.673192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.673470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.673509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.673817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.673851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.674128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.674162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.674363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.674398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.674532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.674565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.674868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.674904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.675101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.675136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.675411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.675445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.675589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.675632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.675913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.675947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.676061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.676095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.676350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.676384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.676663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.676699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.676981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.677015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.677293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.677327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.677621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.677657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.677909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.677945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.678073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.678108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.591 [2024-10-15 13:07:25.678390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.591 [2024-10-15 13:07:25.678424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.591 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.678720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.678755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.679019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.679053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.679241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.679275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.679477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.679512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.679714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.679749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.680010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.680045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.680191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.680224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.680478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.680512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.680703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.680739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.680933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.680968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.681187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.681222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.681448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.681483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.681767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.681803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.682080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.682115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.682398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.682433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.682650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.682686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.682881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.682915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.683218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.683252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.683444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.683479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.683765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.683801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.683957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.683992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.684219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.684258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.684449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.684482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.684689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.684724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.684931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.684965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.685146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.685181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.685382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.685417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.685623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.685659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.685965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.686000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.686278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.686313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.686515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.686549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.686836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.686872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.687123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.592 [2024-10-15 13:07:25.687158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.592 qpair failed and we were unable to recover it.
00:27:05.592 [2024-10-15 13:07:25.687354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.593 [2024-10-15 13:07:25.687389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.593 qpair failed and we were unable to recover it.
00:27:05.593 [2024-10-15 13:07:25.687674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.593 [2024-10-15 13:07:25.687710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.593 qpair failed and we were unable to recover it.
00:27:05.593 [2024-10-15 13:07:25.687976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.593 [2024-10-15 13:07:25.688011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.593 qpair failed and we were unable to recover it.
00:27:05.593 [2024-10-15 13:07:25.688273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.593 [2024-10-15 13:07:25.688308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.593 qpair failed and we were unable to recover it.
00:27:05.593 [2024-10-15 13:07:25.688523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.593 [2024-10-15 13:07:25.688558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.593 qpair failed and we were unable to recover it.
00:27:05.593 [2024-10-15 13:07:25.688845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.593 [2024-10-15 13:07:25.688881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.593 qpair failed and we were unable to recover it.
00:27:05.593 [2024-10-15 13:07:25.689083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.593 [2024-10-15 13:07:25.689117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.593 qpair failed and we were unable to recover it.
00:27:05.593 [2024-10-15 13:07:25.689319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.593 [2024-10-15 13:07:25.689353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.593 qpair failed and we were unable to recover it.
00:27:05.593 [2024-10-15 13:07:25.689629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.593 [2024-10-15 13:07:25.689665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.593 qpair failed and we were unable to recover it.
00:27:05.593 [2024-10-15 13:07:25.689973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.593 [2024-10-15 13:07:25.690008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.593 qpair failed and we were unable to recover it.
00:27:05.593 [2024-10-15 13:07:25.690264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.690299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.690598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.690644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.690779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.690813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.691093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.691127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.691337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.691373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 
00:27:05.593 [2024-10-15 13:07:25.691630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.691664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.691914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.691949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.692251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.692287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.692546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.692580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.692870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.692906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 
00:27:05.593 [2024-10-15 13:07:25.693182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.693217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.693430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.693464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.693677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.693713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.693896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.693930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.694114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.694149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 
00:27:05.593 [2024-10-15 13:07:25.694260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.694294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.694548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.694583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.694889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.694925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.695176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.695215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-10-15 13:07:25.695443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-10-15 13:07:25.695477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 
00:27:05.594 [2024-10-15 13:07:25.695744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.695780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.696047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.696081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.696351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.696385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.696512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.696547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.696747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.696782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 
00:27:05.594 [2024-10-15 13:07:25.697067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.697100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.697365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.697400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.697625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.697661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.697853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.697887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.698140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.698174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 
00:27:05.594 [2024-10-15 13:07:25.698427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.698461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.698711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.698748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.698966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.699000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.699254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.699288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.699542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.699577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 
00:27:05.594 [2024-10-15 13:07:25.699796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.699833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.700094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.700128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.700332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.700367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.700622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.700658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.700839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.700873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 
00:27:05.594 [2024-10-15 13:07:25.701060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.701094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.701317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.701351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.701611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.701646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.701853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.701888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.702148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.702182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 
00:27:05.594 [2024-10-15 13:07:25.702372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.702407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.702618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.702655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.702922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.702957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.703161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.703196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.703456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.703489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 
00:27:05.594 [2024-10-15 13:07:25.703767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.703803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.704094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.594 [2024-10-15 13:07:25.704129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.594 qpair failed and we were unable to recover it. 00:27:05.594 [2024-10-15 13:07:25.704397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.704431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.704636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.704673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.704926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.704960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 
00:27:05.595 [2024-10-15 13:07:25.705248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.705283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.705474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.705507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.705790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.705826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.705976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.706015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.706309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.706343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 
00:27:05.595 [2024-10-15 13:07:25.706630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.706666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.706886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.706920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.707191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.707226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.707514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.707548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.707857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.707893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 
00:27:05.595 [2024-10-15 13:07:25.708077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.708111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.708383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.708417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.708694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.708731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.709015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.709048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.709332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.709366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 
00:27:05.595 [2024-10-15 13:07:25.709646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.709682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.709908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.709941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.710202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.710236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.710370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.710403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 00:27:05.595 [2024-10-15 13:07:25.710681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.595 [2024-10-15 13:07:25.710716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.595 qpair failed and we were unable to recover it. 
00:27:05.595 [2024-10-15 13:07:25.710915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.595 [2024-10-15 13:07:25.710949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.595 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1055 connect() errno = 111, nvme_tcp.c:2399 sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 13:07:25.710915 through 13:07:25.741236, roughly 115 occurrences; only the timestamps differ ...]
00:27:05.599 [2024-10-15 13:07:25.741442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-10-15 13:07:25.741477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-10-15 13:07:25.741682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-10-15 13:07:25.741718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-10-15 13:07:25.741855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-10-15 13:07:25.741889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-10-15 13:07:25.742167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-10-15 13:07:25.742201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-10-15 13:07:25.742476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-10-15 13:07:25.742511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 
00:27:05.599 [2024-10-15 13:07:25.742701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-10-15 13:07:25.742736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-10-15 13:07:25.742991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-10-15 13:07:25.743026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-10-15 13:07:25.743305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-10-15 13:07:25.743339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-10-15 13:07:25.743622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-10-15 13:07:25.743658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-10-15 13:07:25.743856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-10-15 13:07:25.743891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 
00:27:05.599 [2024-10-15 13:07:25.744168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-10-15 13:07:25.744202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-10-15 13:07:25.744484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-10-15 13:07:25.744519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.744705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.744740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.744949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.744983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.745130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.745163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-10-15 13:07:25.745423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.745456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.745705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.745741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.746001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.746035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.746218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.746253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.746471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.746505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-10-15 13:07:25.746706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.746739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.746941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.746974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.747255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.747290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.747542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.747577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.747884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.747922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-10-15 13:07:25.748053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.748086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.748212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.748245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.748515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.748548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.748841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.748883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.749153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.749186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-10-15 13:07:25.749386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.749420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.749697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.749732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.750033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.750068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.750261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.750295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.750425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.750459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-10-15 13:07:25.750663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.750698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.750833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.750867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.751094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.751126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.751339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.751372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.751608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.751642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-10-15 13:07:25.751937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.751973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.752244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.752279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.752472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.752505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.752784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.752819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.753003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.753038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-10-15 13:07:25.753261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.753296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.753490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.753524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.753721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.753757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.754038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.754073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.754351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.754385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-10-15 13:07:25.754590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.754644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.754839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.754874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.755091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.755124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.755327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.755361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.755561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.755596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-10-15 13:07:25.755857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.755892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.756037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.756070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.756259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.756292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.756567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.756612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.756892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.756927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-10-15 13:07:25.757218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.757253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.757529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.757564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.757762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.757797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.757994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.758028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.758238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.758272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-10-15 13:07:25.758549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.758584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.758783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.758817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.759055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.759088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.759268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.759308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-10-15 13:07:25.759590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.759636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-10-15 13:07:25.759771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-10-15 13:07:25.759805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.760058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.760093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.760346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.760381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.760626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.760663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.760965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.761000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 
00:27:05.601 [2024-10-15 13:07:25.761287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.761321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.761596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.761640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.761849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.761883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.762069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.762103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.762357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.762391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 
00:27:05.601 [2024-10-15 13:07:25.762675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.762711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.763014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.763048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.763235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.763271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.763531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.763566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.763786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.763825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 
00:27:05.601 [2024-10-15 13:07:25.764050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.764085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.764285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.764320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.764456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.764491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.764683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.764720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.764996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.765030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 
00:27:05.601 [2024-10-15 13:07:25.765212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.765247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.765463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.765498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.765683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.765718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.765909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.765942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.766123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.766156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 
00:27:05.601 [2024-10-15 13:07:25.766416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.766452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.766750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.766787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.767049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.767083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.767307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.767342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.767454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.767489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 
00:27:05.601 [2024-10-15 13:07:25.767676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.767712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.767992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.768026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.768289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.768324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.768634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.768670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.768946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.768980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 
00:27:05.601 [2024-10-15 13:07:25.769263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.769298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.769557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.769592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.769834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.769868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.770098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.770144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.770410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.770445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 
00:27:05.601 [2024-10-15 13:07:25.770736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.770777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.771068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.771102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.771362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.771397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.771624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.771660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.771932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.771966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 
00:27:05.601 [2024-10-15 13:07:25.772161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.772196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.772378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.772413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.772625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.772660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.772863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.772898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.773150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.773185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 
00:27:05.601 [2024-10-15 13:07:25.773410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.773445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.773658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.773695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.773842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.773876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.774147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.774181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.774326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.774360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 
00:27:05.601 [2024-10-15 13:07:25.774556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.774591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.774804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.774838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.775016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.775051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.775328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.775364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.775562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.775596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 
00:27:05.601 [2024-10-15 13:07:25.775864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.775900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.601 [2024-10-15 13:07:25.776094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.601 [2024-10-15 13:07:25.776128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.601 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.776406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.776440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.776647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.776683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.776986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.777021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-10-15 13:07:25.777287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.777367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.777673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.777714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.777909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.777944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.778201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.778235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.778492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.778527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-10-15 13:07:25.778790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.778826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.779082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.779115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.779394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.779428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.779657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.779693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.779880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.779914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-10-15 13:07:25.780174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.780208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.780464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.780498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.780698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.780733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.780932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.780975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.781233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.781268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-10-15 13:07:25.781469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.781502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.781788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.781823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.782053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.782088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.782341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.782374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.782656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.782693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-10-15 13:07:25.782936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.782969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.783223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.783257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.783439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.783474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.783759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.783795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.784010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.784044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-10-15 13:07:25.784240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.784274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.784549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.784583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.784901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.784935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.785130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.785164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.785440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.785474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-10-15 13:07:25.785679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.785715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.786016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.786051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.786311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.786344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.786568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.786614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 00:27:05.602 [2024-10-15 13:07:25.786916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.602 [2024-10-15 13:07:25.786951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.602 qpair failed and we were unable to recover it. 
00:27:05.602 [2024-10-15 13:07:25.787223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.602 [2024-10-15 13:07:25.787258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:05.602 qpair failed and we were unable to recover it.
00:27:05.602 [... the three-line connect()/qpair-failure pattern above repeats verbatim, timestamps advancing from 13:07:25.787522 through 13:07:25.819813, always for tqpair=0x7ff114000b90, addr=10.0.0.2, port=4420 ...]
00:27:05.605 [2024-10-15 13:07:25.820083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.820117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.820383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.820418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.820716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.820752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.821016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.821049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.821344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.821379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 
00:27:05.605 [2024-10-15 13:07:25.821652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.821687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.821972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.822006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.822282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.822316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.822633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.822669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.822973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.823009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 
00:27:05.605 [2024-10-15 13:07:25.823268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.823308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.823564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.823599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.823871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.823906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.824094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.824128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.824359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.824393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 
00:27:05.605 [2024-10-15 13:07:25.824624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.824661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.824943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.824977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.825164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.825198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.825461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-10-15 13:07:25.825496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-10-15 13:07:25.825724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.825760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 
00:27:05.606 [2024-10-15 13:07:25.825946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.825980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.826234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.826269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.826523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.826558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.826849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.826885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.827183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.827218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 
00:27:05.606 [2024-10-15 13:07:25.827378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.827412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.827596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.827640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.827904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.827939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.828141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.828176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.828381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.828415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 
00:27:05.606 [2024-10-15 13:07:25.828695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.828731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.828984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.829019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.829235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.829269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.829469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.829503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.829765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.829802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 
00:27:05.606 [2024-10-15 13:07:25.830026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.830061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.830340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.830375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.830595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.830656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.830917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.830951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.831229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.831264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 
00:27:05.606 [2024-10-15 13:07:25.831418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.831452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.831706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.831742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.831938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.831972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.832256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-10-15 13:07:25.832289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-10-15 13:07:25.832482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.832517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-10-15 13:07:25.832703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.832739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.832966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.833000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.833208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.833243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.833466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.833500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.833730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.833766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-10-15 13:07:25.834042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.834083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.834361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.834395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.834587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.834631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.834816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.834851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.835127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.835162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-10-15 13:07:25.835432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.835466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.835737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.835772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.835977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.836011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.836198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.836232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.836429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.836463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-10-15 13:07:25.836667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.836703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.836899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.836933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.837118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.837152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.837373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.837407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.837668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.837703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-10-15 13:07:25.837978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.838012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.838289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.838324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.838633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.838670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.838948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.838983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.839211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.839244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-10-15 13:07:25.839424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.839459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.839663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.839698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.839894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.839928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.840141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.840176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.840455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.840489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-10-15 13:07:25.840771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.840807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.841066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.841102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.841379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.841413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.841682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-10-15 13:07:25.841717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-10-15 13:07:25.841996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-10-15 13:07:25.842030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 
00:27:05.608 [the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every failed reconnection attempt from 13:07:25.842271 through 13:07:25.872597]
00:27:05.611 [2024-10-15 13:07:25.872807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.872841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.873106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.873139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.873347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.873382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.873656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.873692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.873914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.873949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-10-15 13:07:25.874142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.874176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.874439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.874473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.874784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.874820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.875073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.875108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.875325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.875358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-10-15 13:07:25.875548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.875583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.875894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.875928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.876138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.876172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.876355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.876389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.876707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.876743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-10-15 13:07:25.876927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.876961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.877214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.877248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.877549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.877583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.877882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.877917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.878172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.878207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-10-15 13:07:25.878509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.878549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-10-15 13:07:25.878850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-10-15 13:07:25.878885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.879020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.879055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.879260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.879296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.879592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.879640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 
00:27:05.890 [2024-10-15 13:07:25.879917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.879951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.880100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.880135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.880437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.880473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.880670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.880709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.880998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.881038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 
00:27:05.890 [2024-10-15 13:07:25.881170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.881204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.881353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.881386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.881684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.881720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.881929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.881966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.882253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.882295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 
00:27:05.890 [2024-10-15 13:07:25.882519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.882553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.882776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.882812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.883102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.883138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.883288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.883327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.883530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.883567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 
00:27:05.890 [2024-10-15 13:07:25.883834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.883872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.884014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.884049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.884235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.884270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.884512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.884546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.884712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.884748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 
00:27:05.890 [2024-10-15 13:07:25.884892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.884928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.885160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.885197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.885512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.885547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.885795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.885833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 00:27:05.890 [2024-10-15 13:07:25.886044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.886078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.890 qpair failed and we were unable to recover it. 
00:27:05.890 [2024-10-15 13:07:25.886419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.890 [2024-10-15 13:07:25.886454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.886738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.886776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.886942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.886978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.887181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.887215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.887403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.887441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 
00:27:05.891 [2024-10-15 13:07:25.887697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.887733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.887985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.888022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.888164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.888201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.888472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.888508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.888792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.888827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 
00:27:05.891 [2024-10-15 13:07:25.889013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.889054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.889361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.889396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.889593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.889639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.889919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.889957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.890226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.890261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 
00:27:05.891 [2024-10-15 13:07:25.890497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.890532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.890810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.890845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.891099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.891135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.891410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.891448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.891732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.891767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 
00:27:05.891 [2024-10-15 13:07:25.892040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.892075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.892289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.892324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.892609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.892645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.892878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.892914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.893123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.893158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 
00:27:05.891 [2024-10-15 13:07:25.893346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.893381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.893656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.893691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.893957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.893991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.894202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.894238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.894515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.894548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 
00:27:05.891 [2024-10-15 13:07:25.894835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.894871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.895149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.895185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.895467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.895502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.895782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.895818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.896116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.896152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 
00:27:05.891 [2024-10-15 13:07:25.896429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-10-15 13:07:25.896465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-10-15 13:07:25.896660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.896696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.896953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.896988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.897266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.897301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.897582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.897626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-10-15 13:07:25.897841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.897876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.898159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.898200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.898474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.898511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.898739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.898777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.899009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.899049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-10-15 13:07:25.899191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.899229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.899483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.899520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.899818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.899854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.900103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.900145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.900306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.900338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-10-15 13:07:25.900532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.900576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.900847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.900888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.901174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.901212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.901418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.901461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.901721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.901756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-10-15 13:07:25.901939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.901973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.902235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.902270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.902403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.902437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.902739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.902776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.903055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.903092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-10-15 13:07:25.903317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.903353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.903576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.903633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.903847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.903884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.904167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.904202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.904417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.904453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-10-15 13:07:25.904710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.904750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.905006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.905048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.905342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.905382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.905647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.905685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-10-15 13:07:25.905955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.905989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-10-15 13:07:25.906236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-10-15 13:07:25.906270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.906571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.906615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.906825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.906859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.907124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.907162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.907472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.907505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-10-15 13:07:25.907778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.907817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.907970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.908007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.908258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.908337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.908711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.908790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.909069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.909108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-10-15 13:07:25.909366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.909401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.909626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.909663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.909807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.909841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.910069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.910102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.910308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.910342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-10-15 13:07:25.910626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.910662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.910887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.910921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.911206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.911240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.911450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.911483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.911789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.911825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-10-15 13:07:25.912101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.912145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.912367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.912401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.912687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.912725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.912874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.912910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.913201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.913235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-10-15 13:07:25.913381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.913415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.913618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.913654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.913836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.913869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.914096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.914131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.914385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.914419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-10-15 13:07:25.914621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.914658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.914879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.914913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.915123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.915157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.915409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.915444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.915670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.915705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-10-15 13:07:25.915973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-10-15 13:07:25.916007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-10-15 13:07:25.916302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.916337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.916613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.916648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.916936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.916971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.917174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.917209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 
00:27:05.894 [2024-10-15 13:07:25.917462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.917497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.917772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.917808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.917952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.917987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.918198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.918232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.918433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.918467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 
00:27:05.894 [2024-10-15 13:07:25.918656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.918693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.918950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.918983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.919263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.919297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.919609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.919645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.919847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.919881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 
00:27:05.894 [2024-10-15 13:07:25.920080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.920114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.920396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.920430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.920710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.920746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.921030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.921063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 00:27:05.894 [2024-10-15 13:07:25.921346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.894 [2024-10-15 13:07:25.921381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.894 qpair failed and we were unable to recover it. 
00:27:05.895 [2024-10-15 13:07:25.929729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.895 [2024-10-15 13:07:25.929765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.895 qpair failed and we were unable to recover it.
00:27:05.895 [2024-10-15 13:07:25.930046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.895 [2024-10-15 13:07:25.930081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.895 qpair failed and we were unable to recover it.
00:27:05.895 [2024-10-15 13:07:25.930281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.895 [2024-10-15 13:07:25.930359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.895 qpair failed and we were unable to recover it.
00:27:05.895 [2024-10-15 13:07:25.930682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.895 [2024-10-15 13:07:25.930724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.895 qpair failed and we were unable to recover it.
00:27:05.895 [2024-10-15 13:07:25.931041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.895 [2024-10-15 13:07:25.931077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.895 qpair failed and we were unable to recover it.
00:27:05.897 [2024-10-15 13:07:25.952036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-10-15 13:07:25.952069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-10-15 13:07:25.952268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-10-15 13:07:25.952302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-10-15 13:07:25.952555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.952590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.952812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.952847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.953107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.953141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 
00:27:05.898 [2024-10-15 13:07:25.953358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.953391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.953643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.953680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.953961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.953996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.954250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.954284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.954539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.954574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 
00:27:05.898 [2024-10-15 13:07:25.954792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.954828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.955082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.955115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.955319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.955359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.955650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.955685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.955884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.955920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 
00:27:05.898 [2024-10-15 13:07:25.956107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.956142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.956426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.956460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.956722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.956763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.957053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.957087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.957358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.957393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 
00:27:05.898 [2024-10-15 13:07:25.957685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.957720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.957991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.958025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.958319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.958353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.958628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.958663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.958952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.958988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 
00:27:05.898 [2024-10-15 13:07:25.959265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.959299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.959558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.959591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.959843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.959878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.960131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.960166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.960468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.960502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 
00:27:05.898 [2024-10-15 13:07:25.960769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.960805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.961097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.961132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.961404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.961438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.961643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.961679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.961934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.961968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 
00:27:05.898 [2024-10-15 13:07:25.962244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.962279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.962549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.962583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.962877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.898 [2024-10-15 13:07:25.962911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.898 qpair failed and we were unable to recover it. 00:27:05.898 [2024-10-15 13:07:25.963180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.963214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.963433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.963468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 
00:27:05.899 [2024-10-15 13:07:25.963711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.963746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.963954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.963988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.964183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.964216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.964423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.964458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.964674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.964710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 
00:27:05.899 [2024-10-15 13:07:25.964905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.964940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.965192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.965226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.965480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.965514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.965816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.965852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.966094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.966128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 
00:27:05.899 [2024-10-15 13:07:25.966396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.966429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.966687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.966722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.966911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.966945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.967230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.967266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.967387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.967418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 
00:27:05.899 [2024-10-15 13:07:25.967647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.967682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.967973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.968006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.968282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.968317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.968618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.968661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.968916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.968951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 
00:27:05.899 [2024-10-15 13:07:25.969244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.969278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.969577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.969621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.969898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.969932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.970213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.970247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.970507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.970542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 
00:27:05.899 [2024-10-15 13:07:25.970840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.970875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.971167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.971203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.971475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.971508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.971731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.971766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.971889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.971923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 
00:27:05.899 [2024-10-15 13:07:25.972217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.972250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.972445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.972478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.972683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.972719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-10-15 13:07:25.972972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-10-15 13:07:25.973006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-10-15 13:07:25.973189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-10-15 13:07:25.973223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 
00:27:05.900 [2024-10-15 13:07:25.973477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.900 [2024-10-15 13:07:25.973511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.900 qpair failed and we were unable to recover it.
[... the same three-line pattern — connect() failed with errno = 111 (ECONNREFUSED), sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 13:07:25.973768 through 13:07:26.005932, every attempt targeting addr=10.0.0.2, port=4420; the failing tqpair handle cycles through 0x9a0c60, 0x7ff118000b90, and 0x7ff114000b90 ...]
00:27:05.903 [2024-10-15 13:07:26.005960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.903 [2024-10-15 13:07:26.005995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.903 qpair failed and we were unable to recover it.
00:27:05.903 [2024-10-15 13:07:26.006274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.006306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.006589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.006632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.006856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.006890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.007104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.007139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.007370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.007404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 
00:27:05.903 [2024-10-15 13:07:26.007627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.007663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.007916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.007951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.008251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.008289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.008445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.008480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.008765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.008800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 
00:27:05.903 [2024-10-15 13:07:26.009079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.009114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.009396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.009432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.009653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.009689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.009892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.009927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.010072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.010107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 
00:27:05.903 [2024-10-15 13:07:26.010397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.010432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.010635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.010671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.010963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.010997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.011191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.011225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-10-15 13:07:26.011502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.011537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 
00:27:05.903 [2024-10-15 13:07:26.011766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-10-15 13:07:26.011801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.011929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.011964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.012116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.012150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.012343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.012377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.012639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.012676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 
00:27:05.904 [2024-10-15 13:07:26.012952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.012987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.013265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.013299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.013425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.013460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.013727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.013763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.014044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.014078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 
00:27:05.904 [2024-10-15 13:07:26.014358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.014393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.014610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.014646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.014836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.014871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.015071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.015107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.015394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.015428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 
00:27:05.904 [2024-10-15 13:07:26.015659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.015695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.015862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.015897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.016035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.016070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.016324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.016358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.016569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.016618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 
00:27:05.904 [2024-10-15 13:07:26.016810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.016845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.017033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.017068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.017267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.017301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.017494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.017530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.017811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.017848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 
00:27:05.904 [2024-10-15 13:07:26.018002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.018037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.018218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.018252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.018459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.018494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.018750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.018786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.019068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.019102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 
00:27:05.904 [2024-10-15 13:07:26.019379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.019413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.019550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.019584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.019870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.019905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.020104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.020139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.020327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.020362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 
00:27:05.904 [2024-10-15 13:07:26.020622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.020658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.020871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.904 [2024-10-15 13:07:26.020905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.904 qpair failed and we were unable to recover it. 00:27:05.904 [2024-10-15 13:07:26.021131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.021172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.021469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.021504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.021706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.021742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 
00:27:05.905 [2024-10-15 13:07:26.021941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.021975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.022253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.022289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.022589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.022631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.022835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.022870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.023149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.023185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 
00:27:05.905 [2024-10-15 13:07:26.023439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.023473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.023673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.023709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.023905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.023941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.024145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.024180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.024367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.024402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 
00:27:05.905 [2024-10-15 13:07:26.024643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.024678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.024989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.025025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.025293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.025329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.025514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.025549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 00:27:05.905 [2024-10-15 13:07:26.025754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.905 [2024-10-15 13:07:26.025790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.905 qpair failed and we were unable to recover it. 
00:27:05.905 [2024-10-15 13:07:26.025980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.026015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.905 [2024-10-15 13:07:26.026273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.026309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.905 [2024-10-15 13:07:26.026588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.026634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.905 [2024-10-15 13:07:26.026923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.026960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.905 [2024-10-15 13:07:26.027165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.027203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.905 [2024-10-15 13:07:26.027409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.027441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.905 [2024-10-15 13:07:26.027736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.027772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.905 [2024-10-15 13:07:26.027938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.027972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.905 [2024-10-15 13:07:26.028276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.028311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.905 [2024-10-15 13:07:26.028527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.028561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.905 [2024-10-15 13:07:26.028861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.028897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.905 [2024-10-15 13:07:26.029184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.029218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.905 [2024-10-15 13:07:26.029513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.905 [2024-10-15 13:07:26.029548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.905 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.029773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.029809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.030007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.030044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.030329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.030364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.030652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.030690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.030984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.031021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.031212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.031248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.031390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.031424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.032854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.032917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.033155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.033190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.033461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.033497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.033726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.033763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.033950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.033986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.034269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.034304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.034578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.034624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.034853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.034887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.035090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.035124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.035327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.035363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.035572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.035617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.035746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.035782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.035991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.036026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.036227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.036265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.036488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.036523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.036810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.036847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.037121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.037155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.037381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.037417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.037641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.037678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.037937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.037972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.038187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.038222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.038405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.038440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.038565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.038608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.038898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.038934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.039244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.039279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.039485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.039519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.039723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.039760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.040017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.040053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.906 [2024-10-15 13:07:26.040310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.906 [2024-10-15 13:07:26.040345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.906 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.040558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.040594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.040859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.040902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.041111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.041146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.041443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.041478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.041685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.041722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.041950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.041986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.042178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.042214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.042509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.042545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.042817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.042852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.043105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.043141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.043351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.043387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.043697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.043733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.043862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.043898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.044111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.044146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.044289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.044325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.044516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.044553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.044854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.044891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.045091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.045126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.045257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.045292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.045496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.045532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.045758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.045794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.046079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.046114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.046347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.046382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.046658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.046695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.046960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.046995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.047193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.047228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.047506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.047541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.047765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.047800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.047942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.047978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.048254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.048289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.048513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.048549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.048793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.048828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.048978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.049014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.049168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.049203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.049396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.049431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.907 qpair failed and we were unable to recover it.
00:27:05.907 [2024-10-15 13:07:26.049629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.907 [2024-10-15 13:07:26.049666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.049921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.049956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.050214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.050249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.050509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.050545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.050791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.050828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.051132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.051167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.051408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.051443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.051706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.051749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.051985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.052021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.052223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.052259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.052439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.052475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.052724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.052761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.053064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.053098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.053409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.053445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.053732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.053768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.053916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.053952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.054172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.054207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.054417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.054452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.054730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.054766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.055058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.055093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.055329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.055365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.055576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.055622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.055832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.055867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.056023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.056058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.056297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.056331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.056585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.056631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.056887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.056922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.057179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.057214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.057402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.057436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.057656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.057692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.057968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.908 [2024-10-15 13:07:26.058003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:05.908 qpair failed and we were unable to recover it.
00:27:05.908 [2024-10-15 13:07:26.058218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-10-15 13:07:26.058253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-10-15 13:07:26.058378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-10-15 13:07:26.058412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-10-15 13:07:26.058688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-10-15 13:07:26.058724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-10-15 13:07:26.058921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-10-15 13:07:26.058963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-10-15 13:07:26.059125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-10-15 13:07:26.059161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 
00:27:05.908 [2024-10-15 13:07:26.059430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-10-15 13:07:26.059465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-10-15 13:07:26.059652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-10-15 13:07:26.059688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-10-15 13:07:26.059975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-10-15 13:07:26.060013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-10-15 13:07:26.060214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-10-15 13:07:26.060250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-10-15 13:07:26.060381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-10-15 13:07:26.060416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 
00:27:05.909 [2024-10-15 13:07:26.060666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-10-15 13:07:26.060702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-10-15 13:07:26.060990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-10-15 13:07:26.061025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-10-15 13:07:26.061167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-10-15 13:07:26.061201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-10-15 13:07:26.061354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-10-15 13:07:26.061388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-10-15 13:07:26.061518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-10-15 13:07:26.061553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 
00:27:05.909 [2024-10-15 13:07:26.062147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.909 [2024-10-15 13:07:26.062227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.909 qpair failed and we were unable to recover it.
00:27:05.911 [2024-10-15 13:07:26.079101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.079136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-10-15 13:07:26.079350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.079384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-10-15 13:07:26.079535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.079569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-10-15 13:07:26.079740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.079775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-10-15 13:07:26.079910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.079944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 
00:27:05.911 [2024-10-15 13:07:26.080090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.080125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-10-15 13:07:26.080278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.080312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-10-15 13:07:26.080515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.080549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-10-15 13:07:26.080735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.080771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-10-15 13:07:26.080994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.081029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 
00:27:05.911 [2024-10-15 13:07:26.081173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.081207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-10-15 13:07:26.081345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.081379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-10-15 13:07:26.081613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-10-15 13:07:26.081649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.081785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.081818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.082002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.082036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 
00:27:05.912 [2024-10-15 13:07:26.082182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.082222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.082363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.082397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.082592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.082637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.082911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.082944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.083067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.083100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 
00:27:05.912 [2024-10-15 13:07:26.083296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.083330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.083544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.083578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.083866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.083900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.084051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.084085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.084196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.084230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 
00:27:05.912 [2024-10-15 13:07:26.084466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.084500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.084693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.084729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.084913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.084946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.085068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.085102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.085255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.085290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 
00:27:05.912 [2024-10-15 13:07:26.085482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.085516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.085708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.085744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.085866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.085901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.086158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.086192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.086376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.086410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 
00:27:05.912 [2024-10-15 13:07:26.086668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.086704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.086856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.086890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.087158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.087193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.087464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.087497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.087678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.087714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 
00:27:05.912 [2024-10-15 13:07:26.087900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.087934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.088130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.088164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.088398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.088434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.088626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.088661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.088952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.088987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 
00:27:05.912 [2024-10-15 13:07:26.089189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.089223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.089451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.089487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.912 qpair failed and we were unable to recover it. 00:27:05.912 [2024-10-15 13:07:26.089672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.912 [2024-10-15 13:07:26.089707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.089922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.089957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.090144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.090178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 
00:27:05.913 [2024-10-15 13:07:26.090389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.090423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.090676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.090712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.090915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.090949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.091154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.091188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.091375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.091410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 
00:27:05.913 [2024-10-15 13:07:26.091667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.091709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.091990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.092024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.092226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.092259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.092453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.092487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.092670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.092706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 
00:27:05.913 [2024-10-15 13:07:26.092842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.092876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.093057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.093092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.093208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.093243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.093381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.093424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.093613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.093649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 
00:27:05.913 [2024-10-15 13:07:26.093828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.093863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.094043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.094076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.094217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.094250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.094521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.094554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.094833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.094868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 
00:27:05.913 [2024-10-15 13:07:26.095073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.095106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.095327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.095361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.095573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.095618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.095887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.095922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 00:27:05.913 [2024-10-15 13:07:26.096037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.913 [2024-10-15 13:07:26.096070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:05.913 qpair failed and we were unable to recover it. 
00:27:05.913 [2024-10-15 13:07:26.096248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.913 [2024-10-15 13:07:26.096283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:05.913 qpair failed and we were unable to recover it.
00:27:05.915 [2024-10-15 13:07:26.106193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.915 [2024-10-15 13:07:26.106269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:05.915 qpair failed and we were unable to recover it.
00:27:05.917 [2024-10-15 13:07:26.120805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.120838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.121012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.121045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.121263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.121297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.121423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.121455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.121568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.121610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 
00:27:05.917 [2024-10-15 13:07:26.121826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.121860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.122107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.122140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.122410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.122443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.122639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.122674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.122920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.122953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 
00:27:05.917 [2024-10-15 13:07:26.123201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.123235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.123369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.123402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.123616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.123658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.123777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.123810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.124002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.124034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 
00:27:05.917 [2024-10-15 13:07:26.124308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.124342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.124533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.124565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.124807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.124841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.124947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.124979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.125222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.125256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 
00:27:05.917 [2024-10-15 13:07:26.125466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.125499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.125706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.125740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.125981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.126014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.126228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.126262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.126443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.126476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 
00:27:05.917 [2024-10-15 13:07:26.126662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.126699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.126915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.126947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.127217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.127251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.127433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.127466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 00:27:05.917 [2024-10-15 13:07:26.127662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.917 [2024-10-15 13:07:26.127696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.917 qpair failed and we were unable to recover it. 
00:27:05.917 [2024-10-15 13:07:26.127937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.127970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.128156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.128186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.128459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.128491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.128681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.128712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.128905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.128937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 
00:27:05.918 [2024-10-15 13:07:26.129197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.129229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.129361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.129394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.129589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.129633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.129763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.129797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.129988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.130021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 
00:27:05.918 [2024-10-15 13:07:26.130266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.130299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.130473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.130506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.130770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.130804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.131010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.131043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.131167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.131200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 
00:27:05.918 [2024-10-15 13:07:26.131422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.131455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.131564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.131598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.131788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.131821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.132011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.132043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.132161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.132193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 
00:27:05.918 [2024-10-15 13:07:26.132398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.132431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.132694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.132730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.132923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.132961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.133147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.133180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.133423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.133456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 
00:27:05.918 [2024-10-15 13:07:26.133758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.133792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.133967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.133999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.134220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.134254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.134496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.134528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.134654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.134689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 
00:27:05.918 [2024-10-15 13:07:26.134821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.134853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.134973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.135006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.135206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.135239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.135438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.135472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.918 [2024-10-15 13:07:26.135660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.135692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 
00:27:05.918 [2024-10-15 13:07:26.135836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.918 [2024-10-15 13:07:26.135868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.918 qpair failed and we were unable to recover it. 00:27:05.919 [2024-10-15 13:07:26.136056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-10-15 13:07:26.136089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-10-15 13:07:26.136283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-10-15 13:07:26.136316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-10-15 13:07:26.136529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-10-15 13:07:26.136563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-10-15 13:07:26.136822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-10-15 13:07:26.136856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 
00:27:05.919 [2024-10-15 13:07:26.137171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-10-15 13:07:26.137205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-10-15 13:07:26.137448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-10-15 13:07:26.137481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-10-15 13:07:26.137678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-10-15 13:07:26.137712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-10-15 13:07:26.137819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-10-15 13:07:26.137850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-10-15 13:07:26.138119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-10-15 13:07:26.138153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 
00:27:05.919 [2024-10-15 13:07:26.138374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-10-15 13:07:26.138406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 
00:27:05.922 [2024-10-15 13:07:26.163745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.163778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-10-15 13:07:26.164024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.164069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-10-15 13:07:26.164213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.164246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-10-15 13:07:26.164369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.164402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-10-15 13:07:26.164651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.164686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 
00:27:05.922 [2024-10-15 13:07:26.164807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.164840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-10-15 13:07:26.164959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.164992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-10-15 13:07:26.165259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.165291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-10-15 13:07:26.165408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.165441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-10-15 13:07:26.165562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.165595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 
00:27:05.922 [2024-10-15 13:07:26.165889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.165922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-10-15 13:07:26.166160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.166193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-10-15 13:07:26.166453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.166486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-10-15 13:07:26.166728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-10-15 13:07:26.166763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.166951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.166985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 
00:27:05.923 [2024-10-15 13:07:26.167177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.167210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.167405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.167438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.167697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.167731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.167851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.167885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.168074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.168106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 
00:27:05.923 [2024-10-15 13:07:26.168242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.168275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.168537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.168570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.168801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.168882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.169115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.169151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.169330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.169372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 
00:27:05.923 [2024-10-15 13:07:26.169586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.169634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.169877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.169909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.170127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.170160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.170278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.170310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.170545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.170578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 
00:27:05.923 [2024-10-15 13:07:26.170772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.170806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.171013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.171046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.171287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.171320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.171443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.171475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.171682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.171717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 
00:27:05.923 [2024-10-15 13:07:26.171919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.171951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.172095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.172129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.172326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.172359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.172549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.172582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.172830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.172863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 
00:27:05.923 [2024-10-15 13:07:26.173106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.173139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.173319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.173353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.173483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.173515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.173786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.173819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.174022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.174054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 
00:27:05.923 [2024-10-15 13:07:26.174247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.174279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.174472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.174505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.174709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.174743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.174932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-10-15 13:07:26.174964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-10-15 13:07:26.175157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.175189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 
00:27:05.924 [2024-10-15 13:07:26.175403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.175436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.175624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.175657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.175855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.175887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.176060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.176092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.176206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.176241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 
00:27:05.924 [2024-10-15 13:07:26.176385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.176415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.176583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.176628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.176739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.176771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.177054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.177088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.177272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.177306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 
00:27:05.924 [2024-10-15 13:07:26.177515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.177548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.177742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.177776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.178011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.178044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.178176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.178210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.178341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.178374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 
00:27:05.924 [2024-10-15 13:07:26.178632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.178666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.178798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.178830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.179039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.179073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.179198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.179230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.179498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.179531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 
00:27:05.924 [2024-10-15 13:07:26.179642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.179677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.179811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.179844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.179973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.180006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.180183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.180215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.180351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.180384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 
00:27:05.924 [2024-10-15 13:07:26.180580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.180623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.180738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.180772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.180910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.180943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.181193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.181225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.181489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.181522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 
00:27:05.924 [2024-10-15 13:07:26.181738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.181770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.181888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.181920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.924 [2024-10-15 13:07:26.182119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-10-15 13:07:26.182153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.924 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.182421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.182454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.182578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.182620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 
00:27:05.925 [2024-10-15 13:07:26.182760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.182794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.182898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.182931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.183172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.183205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.183389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.183421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.183619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.183654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 
00:27:05.925 [2024-10-15 13:07:26.183842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.183880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.184070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.184103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.184346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.184379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.184587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.184637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.184938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.184971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 
00:27:05.925 [2024-10-15 13:07:26.185077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.185109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.185322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.185356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.185558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.185590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.185737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.185771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.185961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.185994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 
00:27:05.925 [2024-10-15 13:07:26.186211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.186244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.186378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.186411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.186688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.186722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.186827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.186860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.187079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.187113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 
00:27:05.925 [2024-10-15 13:07:26.187253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.187286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.187462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.187495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.187624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.187668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.187871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.187903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.188027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.188059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 
00:27:05.925 [2024-10-15 13:07:26.188236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.188269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.188453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.188485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.188645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.188680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.188808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.188840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.188969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.189002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 
00:27:05.925 [2024-10-15 13:07:26.189109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.189140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.189314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.189347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.189613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.189647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.189792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.189826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.189943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.189975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 
00:27:05.925 [2024-10-15 13:07:26.190167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.190200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-10-15 13:07:26.190486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-10-15 13:07:26.190520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.926 [2024-10-15 13:07:26.190656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-10-15 13:07:26.190691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-10-15 13:07:26.190817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-10-15 13:07:26.190849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-10-15 13:07:26.191036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-10-15 13:07:26.191068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 
00:27:05.926 [2024-10-15 13:07:26.191244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-10-15 13:07:26.191277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-10-15 13:07:26.191401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-10-15 13:07:26.191434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-10-15 13:07:26.191656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-10-15 13:07:26.191689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-10-15 13:07:26.191797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-10-15 13:07:26.191830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-10-15 13:07:26.192011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-10-15 13:07:26.192043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 
00:27:05.926 [2024-10-15 13:07:26.192228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-10-15 13:07:26.192262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-10-15 13:07:26.192465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-10-15 13:07:26.192504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-10-15 13:07:26.192795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-10-15 13:07:26.192830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.193069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.193101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.193306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.193338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 
00:27:06.208 [2024-10-15 13:07:26.193513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.193546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.193770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.193803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.193923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.193956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.194127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.194160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.194401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.194434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 
00:27:06.208 [2024-10-15 13:07:26.194641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.194676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.194784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.194818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.194999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.195032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.195213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.195246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.195422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.195455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 
00:27:06.208 [2024-10-15 13:07:26.195650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.195684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.195867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.195901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.196082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.196116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.196221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-10-15 13:07:26.196254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-10-15 13:07:26.196370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.196403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.209 [2024-10-15 13:07:26.196550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.196583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.196710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.196743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.196856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.196889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.197079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.197111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.197234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.197267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.209 [2024-10-15 13:07:26.197493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.197526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.197658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.197692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.197882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.197916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.198095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.198134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.198423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.198456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.209 [2024-10-15 13:07:26.198649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.198684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.198987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.199021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.199292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.199325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.199446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.199480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.199613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.199647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.209 [2024-10-15 13:07:26.199820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.199853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.200033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.200066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.200263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.200297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.200428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.200462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-10-15 13:07:26.200572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-10-15 13:07:26.200613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.209 [2024-10-15 13:07:26.200880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.209 [2024-10-15 13:07:26.200914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.209 qpair failed and we were unable to recover it.
00:27:06.209 [2024-10-15 13:07:26.201083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.209 [2024-10-15 13:07:26.201117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.209 qpair failed and we were unable to recover it.
00:27:06.209 [2024-10-15 13:07:26.201318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.209 [2024-10-15 13:07:26.201355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.209 qpair failed and we were unable to recover it.
00:27:06.209 [2024-10-15 13:07:26.201571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.209 [2024-10-15 13:07:26.201615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.209 qpair failed and we were unable to recover it.
00:27:06.209 [2024-10-15 13:07:26.201736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.209 [2024-10-15 13:07:26.201769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.209 qpair failed and we were unable to recover it.
00:27:06.209 [2024-10-15 13:07:26.201951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.209 [2024-10-15 13:07:26.201984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.209 qpair failed and we were unable to recover it.
00:27:06.209 [2024-10-15 13:07:26.202165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.209 [2024-10-15 13:07:26.202198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.209 qpair failed and we were unable to recover it.
00:27:06.209 [2024-10-15 13:07:26.202314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.209 [2024-10-15 13:07:26.202347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.209 qpair failed and we were unable to recover it.
00:27:06.209 [2024-10-15 13:07:26.202618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.209 [2024-10-15 13:07:26.202653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.209 qpair failed and we were unable to recover it.
00:27:06.209 [2024-10-15 13:07:26.202778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.209 [2024-10-15 13:07:26.202811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.209 qpair failed and we were unable to recover it.
00:27:06.209 [2024-10-15 13:07:26.203005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.209 [2024-10-15 13:07:26.203038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.209 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.203160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.203193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.203305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.203339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.203524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.203557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.203765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.203798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.203997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.204037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.204303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.204337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.204512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.204544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.204690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.204724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.204969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.205002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.205189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.205222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.205426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.205459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.205574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.205618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.205801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.205833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.206045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.206078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.206192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.206224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.206412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.206446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.206653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.206689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.206935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.206969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.207084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.207116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.207320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.207352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.207471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.207504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.207759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.207793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.208032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.208064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.208308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.208342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.208527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.208559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.208750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.208784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.209061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.209094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.209312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.209345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.209533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.209566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.209756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.209789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.209905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.209936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.210 [2024-10-15 13:07:26.210161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.210 [2024-10-15 13:07:26.210232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.210 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.210363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.210400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.210525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.210558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.210742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.210776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.210953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.210986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.211223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.211257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.211440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.211473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.211665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.211699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.211902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.211935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.212055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.212088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.212202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.212234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.212368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.212401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.212586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.212626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.212812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.212852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.213091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.213123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.213320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.213352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.213465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.213499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.213676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.213711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.213835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.213867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.214005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.214037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.214324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.214357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.214546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.214579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.214799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.214834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.214960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.214994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.215114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.215147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.215415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.215448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.215630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.215664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.215864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.215896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.216036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.216069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.216261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.216292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.216552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.216586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.216770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.216804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.211 [2024-10-15 13:07:26.216911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.211 [2024-10-15 13:07:26.216943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.211 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.217132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.217164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.217369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.217402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.217572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.217614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.217737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.217771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.217969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.218001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.218112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.218144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.218335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.218369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.218498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.218531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.218767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.218800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.218924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.218956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.219067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.219099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.219206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.219239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.219416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.219447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.219652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.219685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.219878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.219910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.220013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.220047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.220223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.220256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.220443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.220475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.220650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.220685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.220795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.220828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.221007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.221047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.221236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.221268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.221528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.221560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.221676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.221708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.221924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.221958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.222070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.222103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.222275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.222307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.222476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.222508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.222766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.222801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.223043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.223075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.223202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.212 [2024-10-15 13:07:26.223235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.212 qpair failed and we were unable to recover it.
00:27:06.212 [2024-10-15 13:07:26.223415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.213 [2024-10-15 13:07:26.223448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.213 qpair failed and we were unable to recover it.
00:27:06.213 [2024-10-15 13:07:26.223641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.213 [2024-10-15 13:07:26.223675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.213 qpair failed and we were unable to recover it.
00:27:06.213 [2024-10-15 13:07:26.223853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.213 [2024-10-15 13:07:26.223884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.213 qpair failed and we were unable to recover it.
00:27:06.213 [2024-10-15 13:07:26.224131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.213 [2024-10-15 13:07:26.224164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.213 qpair failed and we were unable to recover it.
00:27:06.213 [2024-10-15 13:07:26.224342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.213 [2024-10-15 13:07:26.224374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.213 qpair failed and we were unable to recover it.
00:27:06.213 [2024-10-15 13:07:26.224634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.213 [2024-10-15 13:07:26.224669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.213 qpair failed and we were unable to recover it.
00:27:06.213 [2024-10-15 13:07:26.224855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.213 [2024-10-15 13:07:26.224890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.213 qpair failed and we were unable to recover it.
00:27:06.213 [2024-10-15 13:07:26.225148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.213 [2024-10-15 13:07:26.225180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.213 qpair failed and we were unable to recover it.
00:27:06.213 [2024-10-15 13:07:26.225370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.213 [2024-10-15 13:07:26.225403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.213 qpair failed and we were unable to recover it.
00:27:06.213 [2024-10-15 13:07:26.225619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.225654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.225829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.225861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.225975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.226006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.226270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.226303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.226483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.226515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 
00:27:06.213 [2024-10-15 13:07:26.226637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.226672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.226787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.226819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.227074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.227148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.227411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.227448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.227651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.227688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 
00:27:06.213 [2024-10-15 13:07:26.227929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.227963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.228142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.228175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.228361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.228394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.228661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.228695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.228900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.228933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 
00:27:06.213 [2024-10-15 13:07:26.229205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.229237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.229452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.229485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-10-15 13:07:26.229590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-10-15 13:07:26.229639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.229821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.229853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.230043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.230075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-10-15 13:07:26.230207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.230239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.230370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.230403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.230575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.230618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.230807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.230840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.231040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.231072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-10-15 13:07:26.231283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.231316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.231422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.231455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.231569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.231613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.231809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.231842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.232032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.232065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-10-15 13:07:26.232244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.232276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.232476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.232509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.232691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.232726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.232970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.233002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.233253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.233293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-10-15 13:07:26.233508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.233540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.233691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.233725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.233925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.233958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.234132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.234164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.234359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.234392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-10-15 13:07:26.234501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.234534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.234710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.234743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.234924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.234956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.235162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.235194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-10-15 13:07:26.235326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-10-15 13:07:26.235358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-10-15 13:07:26.235530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.235563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.235708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.235742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.235868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.235900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.236119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.236153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.236416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.236449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 
00:27:06.215 [2024-10-15 13:07:26.236637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.236671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.236855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.236887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.237027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.237059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.237178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.237210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.237397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.237430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 
00:27:06.215 [2024-10-15 13:07:26.237618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.237652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.237775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.237807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.238065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.238098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.238335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.238367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.238548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.238582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 
00:27:06.215 [2024-10-15 13:07:26.238791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.238825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.239009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.239041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.239175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.239208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.239383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.239416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.239592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.239634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 
00:27:06.215 [2024-10-15 13:07:26.239767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.239800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.240007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.240040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.240301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.240334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.240571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.240614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.240804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.240838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 
00:27:06.215 [2024-10-15 13:07:26.241023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.241055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.241186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.241219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.241414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.241447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.241632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.241667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-10-15 13:07:26.241806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.241839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 
00:27:06.215 [2024-10-15 13:07:26.242066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-10-15 13:07:26.242139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.216 [2024-10-15 13:07:26.242410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.216 [2024-10-15 13:07:26.242448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.216 qpair failed and we were unable to recover it. 00:27:06.216 [2024-10-15 13:07:26.242575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.216 [2024-10-15 13:07:26.242624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.216 qpair failed and we were unable to recover it. 00:27:06.216 [2024-10-15 13:07:26.242872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.216 [2024-10-15 13:07:26.242906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.216 qpair failed and we were unable to recover it. 00:27:06.216 [2024-10-15 13:07:26.243104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.216 [2024-10-15 13:07:26.243137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.216 qpair failed and we were unable to recover it. 
00:27:06.219 [2024-10-15 13:07:26.268269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.219 [2024-10-15 13:07:26.268302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.219 qpair failed and we were unable to recover it. 00:27:06.219 [2024-10-15 13:07:26.268546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.219 [2024-10-15 13:07:26.268579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.219 qpair failed and we were unable to recover it. 00:27:06.219 [2024-10-15 13:07:26.268805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.219 [2024-10-15 13:07:26.268838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.219 qpair failed and we were unable to recover it. 00:27:06.219 [2024-10-15 13:07:26.268950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.219 [2024-10-15 13:07:26.268982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.219 qpair failed and we were unable to recover it. 00:27:06.219 [2024-10-15 13:07:26.269106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.219 [2024-10-15 13:07:26.269138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.219 qpair failed and we were unable to recover it. 
00:27:06.219 [2024-10-15 13:07:26.269266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.219 [2024-10-15 13:07:26.269303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.219 qpair failed and we were unable to recover it. 00:27:06.219 [2024-10-15 13:07:26.269480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.219 [2024-10-15 13:07:26.269513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.219 qpair failed and we were unable to recover it. 00:27:06.219 [2024-10-15 13:07:26.269699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.219 [2024-10-15 13:07:26.269734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.269860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.269893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.270082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.270115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 
00:27:06.220 [2024-10-15 13:07:26.270354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.270387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.270506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.270539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.270788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.270823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.270930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.270961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.271082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.271114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 
00:27:06.220 [2024-10-15 13:07:26.271376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.271409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.271542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.271575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.271711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.271744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.271931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.271963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.272159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.272190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 
00:27:06.220 [2024-10-15 13:07:26.272373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.272406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.272523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.272556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.272752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.272785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.272897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.272930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.273145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.273177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 
00:27:06.220 [2024-10-15 13:07:26.273369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.273402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.273671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.273706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.273891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.273924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.274138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.274171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.274359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.274391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 
00:27:06.220 [2024-10-15 13:07:26.274574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.274615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.274896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.274930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.275115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.275147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.275381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.275416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.275613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.275648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 
00:27:06.220 [2024-10-15 13:07:26.275890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.275923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.276106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.276138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.276312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.276344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.276541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.276575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.220 qpair failed and we were unable to recover it. 00:27:06.220 [2024-10-15 13:07:26.276755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.220 [2024-10-15 13:07:26.276788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 
00:27:06.221 [2024-10-15 13:07:26.277053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.277085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.277268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.277301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.277534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.277567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.277712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.277745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.277932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.277964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 
00:27:06.221 [2024-10-15 13:07:26.278093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.278130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.278239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.278271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.278415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.278447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.278575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.278616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.278730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.278765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 
00:27:06.221 [2024-10-15 13:07:26.279003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.279036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.279217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.279250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.279422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.279454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.279625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.279658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.279828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.279861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 
00:27:06.221 [2024-10-15 13:07:26.280123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.280156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.280396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.280430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.280544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.280577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.280732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.280765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.281033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.281066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 
00:27:06.221 [2024-10-15 13:07:26.281241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.281273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.281455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.281488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.281701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.281735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.281916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.281948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.282131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.282164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 
00:27:06.221 [2024-10-15 13:07:26.282419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.282452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.282698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.282731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.282932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.282966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.283174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.283207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.283386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.283419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 
00:27:06.221 [2024-10-15 13:07:26.283614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.283648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.221 [2024-10-15 13:07:26.283887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.221 [2024-10-15 13:07:26.283920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.221 qpair failed and we were unable to recover it. 00:27:06.222 [2024-10-15 13:07:26.284059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.222 [2024-10-15 13:07:26.284091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.222 qpair failed and we were unable to recover it. 00:27:06.222 [2024-10-15 13:07:26.284277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.222 [2024-10-15 13:07:26.284310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.222 qpair failed and we were unable to recover it. 00:27:06.222 [2024-10-15 13:07:26.284510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.222 [2024-10-15 13:07:26.284543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.222 qpair failed and we were unable to recover it. 
00:27:06.222 [2024-10-15 13:07:26.284736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.222 [2024-10-15 13:07:26.284771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.222 qpair failed and we were unable to recover it.
[... identical error triplet — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 13:07:26.284 through 13:07:26.310 ...]
00:27:06.225 [2024-10-15 13:07:26.310444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.225 [2024-10-15 13:07:26.310476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.225 qpair failed and we were unable to recover it. 00:27:06.225 [2024-10-15 13:07:26.310716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.225 [2024-10-15 13:07:26.310751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.225 qpair failed and we were unable to recover it. 00:27:06.225 [2024-10-15 13:07:26.310938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.225 [2024-10-15 13:07:26.310970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.225 qpair failed and we were unable to recover it. 00:27:06.225 [2024-10-15 13:07:26.311089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.225 [2024-10-15 13:07:26.311121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.225 qpair failed and we were unable to recover it. 00:27:06.225 [2024-10-15 13:07:26.311301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.225 [2024-10-15 13:07:26.311334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.225 qpair failed and we were unable to recover it. 
00:27:06.225 [2024-10-15 13:07:26.311597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.225 [2024-10-15 13:07:26.311641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.225 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.311831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.311864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.311994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.312026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.312183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.312216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.312473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.312507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 
00:27:06.226 [2024-10-15 13:07:26.312692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.312727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.312907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.312941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.313074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.313107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.313342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.313375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.313551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.313585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 
00:27:06.226 [2024-10-15 13:07:26.313768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.313808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.313933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.313964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.314233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.314266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.314454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.314487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.314728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.314762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 
00:27:06.226 [2024-10-15 13:07:26.314958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.314989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.315202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.315234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.315414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.315445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.315619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.315653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.315918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.315951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 
00:27:06.226 [2024-10-15 13:07:26.316133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.316167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.316286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.316318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.316565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.316597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.316866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.316900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.317095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.317127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 
00:27:06.226 [2024-10-15 13:07:26.317258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.317289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.317555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.317588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.317790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.317823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.318002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.318035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.318279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.318311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 
00:27:06.226 [2024-10-15 13:07:26.318500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.318533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.318714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.318749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.226 [2024-10-15 13:07:26.318867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.226 [2024-10-15 13:07:26.318899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.226 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.319143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.319176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.319347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.319379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 
00:27:06.227 [2024-10-15 13:07:26.319513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.319546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.319741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.319776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.319950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.319982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.320220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.320254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.320361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.320394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 
00:27:06.227 [2024-10-15 13:07:26.320612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.320646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.320774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.320807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.320938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.320970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.321148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.321181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.321356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.321389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 
00:27:06.227 [2024-10-15 13:07:26.321655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.321689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.321861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.321894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.322144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.322177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.322300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.322332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.322543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.322576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 
00:27:06.227 [2024-10-15 13:07:26.322842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.322881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.323061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.323093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.323229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.323261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.323388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.323421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.323616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.323650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 
00:27:06.227 [2024-10-15 13:07:26.323891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.323925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.324100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.324133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.324382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.324414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.324609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.324644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.324934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.324967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 
00:27:06.227 [2024-10-15 13:07:26.325086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.325119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.325362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.325395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.325520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.325552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.325753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.325787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 00:27:06.227 [2024-10-15 13:07:26.325910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.227 [2024-10-15 13:07:26.325943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.227 qpair failed and we were unable to recover it. 
00:27:06.228 [2024-10-15 13:07:26.326056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.228 [2024-10-15 13:07:26.326088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.228 qpair failed and we were unable to recover it. 00:27:06.228 [2024-10-15 13:07:26.326221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.228 [2024-10-15 13:07:26.326254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.228 qpair failed and we were unable to recover it. 00:27:06.228 [2024-10-15 13:07:26.326376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.228 [2024-10-15 13:07:26.326408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.228 qpair failed and we were unable to recover it. 00:27:06.228 [2024-10-15 13:07:26.326648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.228 [2024-10-15 13:07:26.326682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.228 qpair failed and we were unable to recover it. 00:27:06.228 [2024-10-15 13:07:26.326858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.228 [2024-10-15 13:07:26.326890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.228 qpair failed and we were unable to recover it. 
00:27:06.228 [2024-10-15 13:07:26.327018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.228 [2024-10-15 13:07:26.327052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.228 qpair failed and we were unable to recover it. 
00:27:06.231 [2024-10-15 13:07:26.352805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.231 [2024-10-15 13:07:26.352839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.231 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.352947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.352979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.353150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.353183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.353358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.353391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.353575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.353616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 
00:27:06.232 [2024-10-15 13:07:26.353799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.353831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.353952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.353986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.354158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.354191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.354449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.354483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.354656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.354691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 
00:27:06.232 [2024-10-15 13:07:26.354958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.354990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.355288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.355322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.355499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.355530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.355736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.355770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.356032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.356066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 
00:27:06.232 [2024-10-15 13:07:26.356199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.356231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.356481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.356514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.356753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.356787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.356914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.356946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.357116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.357148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 
00:27:06.232 [2024-10-15 13:07:26.357331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.357364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.357487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.357521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.357719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.357752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.357927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.357959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.358179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.358213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 
00:27:06.232 [2024-10-15 13:07:26.358479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.358519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.358641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.358675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.358891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.358925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.359108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.232 [2024-10-15 13:07:26.359140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.232 qpair failed and we were unable to recover it. 00:27:06.232 [2024-10-15 13:07:26.359379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.359412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 
00:27:06.233 [2024-10-15 13:07:26.359620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.359653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.359900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.359934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.360109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.360142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.360378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.360411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.360610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.360643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 
00:27:06.233 [2024-10-15 13:07:26.360819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.360851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.361096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.361128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.361312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.361344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.361584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.361626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.361835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.361867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 
00:27:06.233 [2024-10-15 13:07:26.361995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.362027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.362230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.362264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.362452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.362485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.362743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.362777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.362949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.362981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 
00:27:06.233 [2024-10-15 13:07:26.363161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.363192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.363441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.363473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.363663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.363696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.363826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.363859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.364071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.364104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 
00:27:06.233 [2024-10-15 13:07:26.364310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.364343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.364552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.364585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.364726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.364758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.364933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.364965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.365217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.365250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 
00:27:06.233 [2024-10-15 13:07:26.365371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.365404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.365590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.365632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.365830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.365863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.366041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.366073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 00:27:06.233 [2024-10-15 13:07:26.366261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.233 [2024-10-15 13:07:26.366294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.233 qpair failed and we were unable to recover it. 
00:27:06.233 [2024-10-15 13:07:26.366465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.366498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 00:27:06.234 [2024-10-15 13:07:26.366710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.366745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 00:27:06.234 [2024-10-15 13:07:26.366924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.366957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 00:27:06.234 [2024-10-15 13:07:26.367127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.367160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 00:27:06.234 [2024-10-15 13:07:26.367441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.367474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 
00:27:06.234 [2024-10-15 13:07:26.367667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.367702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 00:27:06.234 [2024-10-15 13:07:26.367877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.367911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 00:27:06.234 [2024-10-15 13:07:26.368160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.368192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 00:27:06.234 [2024-10-15 13:07:26.368304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.368336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 00:27:06.234 [2024-10-15 13:07:26.368545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.368578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 
00:27:06.234 [2024-10-15 13:07:26.368796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.368830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 00:27:06.234 [2024-10-15 13:07:26.369074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.369107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 00:27:06.234 [2024-10-15 13:07:26.369284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.369317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 00:27:06.234 [2024-10-15 13:07:26.369484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.369518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 00:27:06.234 [2024-10-15 13:07:26.369652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.234 [2024-10-15 13:07:26.369686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.234 qpair failed and we were unable to recover it. 
00:27:06.234 [2024-10-15 13:07:26.369927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.234 [2024-10-15 13:07:26.369959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.234 qpair failed and we were unable to recover it.
00:27:06.234 [... the same three-line error sequence (posix.c:1055:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with timestamps from 13:07:26.370092 through 13:07:26.395875 ...]
00:27:06.238 [2024-10-15 13:07:26.396136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.238 [2024-10-15 13:07:26.396170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.238 qpair failed and we were unable to recover it.
00:27:06.238 [2024-10-15 13:07:26.396411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.396444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.396621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.396656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.396795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.396828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.397035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.397069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.397192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.397224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 
00:27:06.238 [2024-10-15 13:07:26.397462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.397496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.397685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.397718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.397922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.397955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.398224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.398258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.398507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.398539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 
00:27:06.238 [2024-10-15 13:07:26.398786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.398819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.398994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.399027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.399269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.399301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.399433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.399466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.399710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.399745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 
00:27:06.238 [2024-10-15 13:07:26.399950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.399984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.400189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.400222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.400462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.400495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.400783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.400821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.401010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.401043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 
00:27:06.238 [2024-10-15 13:07:26.401230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.401264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.401511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.401546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.238 qpair failed and we were unable to recover it. 00:27:06.238 [2024-10-15 13:07:26.401675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.238 [2024-10-15 13:07:26.401715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.401972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.402006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.402190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.402224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 
00:27:06.239 [2024-10-15 13:07:26.402414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.402449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.402622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.402656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.402875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.402909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.403087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.403122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.403309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.403343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 
00:27:06.239 [2024-10-15 13:07:26.403476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.403510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.403707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.403742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.403916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.403948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.404195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.404229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.404357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.404389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 
00:27:06.239 [2024-10-15 13:07:26.404571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.404612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.404739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.404773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.404911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.404944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.405137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.405170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.405285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.405317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 
00:27:06.239 [2024-10-15 13:07:26.405431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.405464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.405574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.405614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.405794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.405828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.405947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.405986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.406106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.406138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 
00:27:06.239 [2024-10-15 13:07:26.406376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.406410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.406582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.406626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.406825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.406859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.406987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.407020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.407204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.407239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 
00:27:06.239 [2024-10-15 13:07:26.407376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.407410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.407670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.407705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.407898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.407932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.408053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.408086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 00:27:06.239 [2024-10-15 13:07:26.408215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.239 [2024-10-15 13:07:26.408254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.239 qpair failed and we were unable to recover it. 
00:27:06.240 [2024-10-15 13:07:26.408507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.408544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.408757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.408791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.408948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.408983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.409172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.409206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.409320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.409353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 
00:27:06.240 [2024-10-15 13:07:26.409539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.409575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.409829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.409869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.410117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.410160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.410416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.410455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.410638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.410674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 
00:27:06.240 [2024-10-15 13:07:26.410903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.410939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.411120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.411163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.411359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.411393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.411591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.411643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.411919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.411958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 
00:27:06.240 [2024-10-15 13:07:26.412081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.412114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.412245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.412279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.412484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.412518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.412634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.412669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.412777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.412811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 
00:27:06.240 [2024-10-15 13:07:26.412950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.412983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.413101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.413135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.413261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.413295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.413413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.413447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.413634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.413672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 
00:27:06.240 [2024-10-15 13:07:26.413891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.413932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.414126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.414167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.414353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.414388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.414654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.414690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 00:27:06.240 [2024-10-15 13:07:26.414966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.240 [2024-10-15 13:07:26.415001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.240 qpair failed and we were unable to recover it. 
00:27:06.240 [2024-10-15 13:07:26.415250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.240 [2024-10-15 13:07:26.415286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.240 qpair failed and we were unable to recover it.
00:27:06.240 [2024-10-15 13:07:26.415582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.240 [2024-10-15 13:07:26.415639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.415762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.415797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.416068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.416103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.416243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.416281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.416552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.416588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.416780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.416818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.417069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.417114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.417297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.417334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.417548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.417589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.417801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.417839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.418089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.418129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.418321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.418357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.418616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.418655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.418846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.418881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.419011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.419044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.419266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.419302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.419432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.419477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.419596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.419648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.419847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.419883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.420012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.420053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.420274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.420308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.420551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.420587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.420821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.420858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.421077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.421120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.421307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.421342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.421598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.421643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.241 [2024-10-15 13:07:26.421855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.241 [2024-10-15 13:07:26.421888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.241 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.422128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.422163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.422349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.422384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.422617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.422653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.422764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.422799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.423091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.423126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.423338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.423372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.423566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.423623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.423764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.423798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.424011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.424045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.424167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.424201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.424463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.424498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.424739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.424776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.424892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.424932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.425040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.425080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.425215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.425250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.425586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.425630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.425917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.425953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.426140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.426174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.426382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.426416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.426615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.426652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.426879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.426913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.427045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.427081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.427322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.427357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.427496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.427534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.427735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.427774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.427974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.428012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.428220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.428258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.428397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.428431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.428628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.428663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.428797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.428846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.429121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.429159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.429371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.242 [2024-10-15 13:07:26.429408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.242 qpair failed and we were unable to recover it.
00:27:06.242 [2024-10-15 13:07:26.429527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.429572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.429782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.429816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.430064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.430101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.430293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.430330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.430568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.430611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.430857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.430892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.431093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.431128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.431335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.431370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.431620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.431668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.431916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.431950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.432154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.432188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.432371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.432408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.432538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.432576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.432809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.432844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.433028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.433071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.433318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.433351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.433500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.433533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.433776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.433812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.434126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.434198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.434356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.434393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.434657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.434694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.434886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.434919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.435187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.435220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.435340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.435373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.435428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9aebb0 (9): Bad file descriptor
00:27:06.243 [2024-10-15 13:07:26.435726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.435797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.435962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.435999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.436190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.436223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.436348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.436382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.436613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.436648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.436826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.436858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.437097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.243 [2024-10-15 13:07:26.437129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.243 qpair failed and we were unable to recover it.
00:27:06.243 [2024-10-15 13:07:26.437337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.244 [2024-10-15 13:07:26.437372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.244 qpair failed and we were unable to recover it.
00:27:06.244 [2024-10-15 13:07:26.437564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.244 [2024-10-15 13:07:26.437596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.244 qpair failed and we were unable to recover it.
00:27:06.244 [2024-10-15 13:07:26.437793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.244 [2024-10-15 13:07:26.437826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.244 qpair failed and we were unable to recover it.
00:27:06.244 [2024-10-15 13:07:26.437996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.244 [2024-10-15 13:07:26.438028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.244 qpair failed and we were unable to recover it.
00:27:06.244 [2024-10-15 13:07:26.438216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.438249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.438514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.438547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.438752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.438785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.438979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.439012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.439181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.439213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 
00:27:06.244 [2024-10-15 13:07:26.439321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.439353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.439559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.439590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.439846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.439880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.440010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.440042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.440155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.440188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 
00:27:06.244 [2024-10-15 13:07:26.440295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.440327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.440595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.440640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.440907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.440939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.441127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.441160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.441400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.441433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 
00:27:06.244 [2024-10-15 13:07:26.441618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.441659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.441846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.441879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.442090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.442123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.442348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.442380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.442499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.442531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 
00:27:06.244 [2024-10-15 13:07:26.442706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.442739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.442936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.442969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.443084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.443117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.443387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.443420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.443616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.443649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 
00:27:06.244 [2024-10-15 13:07:26.443859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.443892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.444067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.444098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.244 [2024-10-15 13:07:26.444223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.244 [2024-10-15 13:07:26.444254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.244 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.444384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.444418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.444629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.444663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 
00:27:06.245 [2024-10-15 13:07:26.444946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.444978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.445219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.445251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.445374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.445406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.445526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.445558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.445827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.445861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 
00:27:06.245 [2024-10-15 13:07:26.445986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.446019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.446257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.446290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.446474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.446507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.446742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.446776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.446953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.446985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 
00:27:06.245 [2024-10-15 13:07:26.447126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.447158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.447398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.447431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.447698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.447732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.447987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.448019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.448148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.448181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 
00:27:06.245 [2024-10-15 13:07:26.448356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.448389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.448518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.448550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.448803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.448837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.449014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.449047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.449180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.449213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 
00:27:06.245 [2024-10-15 13:07:26.449451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.449483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.449666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.449701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.449908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.449940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.450118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.450157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.450367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.450400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 
00:27:06.245 [2024-10-15 13:07:26.450576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.450626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.450905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.450939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.451113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.451147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.451281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.451315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.451485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.451517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 
00:27:06.245 [2024-10-15 13:07:26.451702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.451737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.451925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.245 [2024-10-15 13:07:26.451956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.245 qpair failed and we were unable to recover it. 00:27:06.245 [2024-10-15 13:07:26.452137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.452170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.452311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.452344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.452523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.452556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 
00:27:06.246 [2024-10-15 13:07:26.452737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.452771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.452877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.452910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.453120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.453153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.453395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.453428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.453620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.453655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 
00:27:06.246 [2024-10-15 13:07:26.453760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.453792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.453978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.454012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.454221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.454253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.454494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.454528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.454714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.454749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 
00:27:06.246 [2024-10-15 13:07:26.454920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.454953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.455148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.455180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.455369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.455401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.455518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.455550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.455693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.455726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 
00:27:06.246 [2024-10-15 13:07:26.455902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.455935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.456123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.456155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.456284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.456316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.456526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.456559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.456694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.456729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 
00:27:06.246 [2024-10-15 13:07:26.456847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.456880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.457077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.457110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.457301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.457334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.457534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.457566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.246 [2024-10-15 13:07:26.457866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.457900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 
00:27:06.246 [2024-10-15 13:07:26.458026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.246 [2024-10-15 13:07:26.458059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.246 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.458179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.458212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.458322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.458354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.458492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.458526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.458649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.458684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 
00:27:06.247 [2024-10-15 13:07:26.458864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.458902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.459122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.459154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.459268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.459301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.459505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.459539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.459712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.459745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 
00:27:06.247 [2024-10-15 13:07:26.459919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.459953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.460140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.460173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.460390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.460423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.460614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.460648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.460818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.460851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 
00:27:06.247 [2024-10-15 13:07:26.461096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.461129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.461308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.461341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.461523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.461556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.461746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.461780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.462038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.462070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 
00:27:06.247 [2024-10-15 13:07:26.462205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.462239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.462362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.462395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.462588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.462650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.462773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.462806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.462941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.462974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 
00:27:06.247 [2024-10-15 13:07:26.463147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.463179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.463437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.463471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.463590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.463635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.463826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.463859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.464066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.464100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 
00:27:06.247 [2024-10-15 13:07:26.464341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.464373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.464566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.464610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.464857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.464890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.465023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-10-15 13:07:26.465056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.247 qpair failed and we were unable to recover it. 00:27:06.247 [2024-10-15 13:07:26.465225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.465258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 
00:27:06.248 [2024-10-15 13:07:26.465377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.465411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.465582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.465626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.465867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.465899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.466164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.466197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.466463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.466497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 
00:27:06.248 [2024-10-15 13:07:26.466689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.466726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.466966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.466999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.467186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.467219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.467490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.467522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.467791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.467825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 
00:27:06.248 [2024-10-15 13:07:26.468002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.468041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.468252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.468285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.468524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.468557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.468767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.468801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.469017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.469049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 
00:27:06.248 [2024-10-15 13:07:26.469159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.469190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.469382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.469416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.469659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.469694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.469893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.469925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.470098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.470131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 
00:27:06.248 [2024-10-15 13:07:26.470333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.470365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.470535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.470568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.470752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.470786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.470977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.471010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.471201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.471235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 
00:27:06.248 [2024-10-15 13:07:26.471447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.471480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.471663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.471697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.471825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.471857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.472048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.472081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.472261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.472294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 
00:27:06.248 [2024-10-15 13:07:26.472481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.472514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.472769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.472803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.472979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.473012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.473274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.473307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.248 [2024-10-15 13:07:26.473425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.473458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 
00:27:06.248 [2024-10-15 13:07:26.473658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-10-15 13:07:26.473693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.248 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.473875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.473908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.474106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.474139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.474399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.474432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.474671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.474705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 
00:27:06.249 [2024-10-15 13:07:26.474904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.474936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.475137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.475170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.475445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.475478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.475673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.475729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.475850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.475883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 
00:27:06.249 [2024-10-15 13:07:26.476066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.476099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.476306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.476338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.476532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.476565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.476745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.476780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.477054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.477087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 
00:27:06.249 [2024-10-15 13:07:26.477331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.477370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.477586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.477639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.477825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.477858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.477971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.478005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 00:27:06.249 [2024-10-15 13:07:26.478212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-10-15 13:07:26.478245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.249 qpair failed and we were unable to recover it. 
00:27:06.249 [2024-10-15 13:07:26.478484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.249 [2024-10-15 13:07:26.478517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.249 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / qpair failed triplet repeats for tqpair=0x7ff114000b90 through 13:07:26.482962 ...]
00:27:06.250 [2024-10-15 13:07:26.483185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.250 [2024-10-15 13:07:26.483257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.250 qpair failed and we were unable to recover it.
[... same triplet repeats for tqpair=0x7ff118000b90 through 13:07:26.492033 ...]
00:27:06.251 [2024-10-15 13:07:26.492290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.251 [2024-10-15 13:07:26.492361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.251 qpair failed and we were unable to recover it.
[... same triplet repeats for tqpair=0x7ff120000b90 through 13:07:26.503971 ...]
00:27:06.252 [2024-10-15 13:07:26.504173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-10-15 13:07:26.504213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-10-15 13:07:26.504344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-10-15 13:07:26.504377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-10-15 13:07:26.504500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-10-15 13:07:26.504534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-10-15 13:07:26.504743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-10-15 13:07:26.504778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-10-15 13:07:26.504995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-10-15 13:07:26.505028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 
00:27:06.252 [2024-10-15 13:07:26.505218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-10-15 13:07:26.505250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-10-15 13:07:26.505448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-10-15 13:07:26.505481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-10-15 13:07:26.505664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-10-15 13:07:26.505699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-10-15 13:07:26.505820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-10-15 13:07:26.505852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-10-15 13:07:26.506034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-10-15 13:07:26.506067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 
00:27:06.253 [2024-10-15 13:07:26.506359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-10-15 13:07:26.506393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-10-15 13:07:26.506561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-10-15 13:07:26.506594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-10-15 13:07:26.506744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-10-15 13:07:26.506778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-10-15 13:07:26.506980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-10-15 13:07:26.507013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-10-15 13:07:26.507232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-10-15 13:07:26.507265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 
00:27:06.253 [2024-10-15 13:07:26.507507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-10-15 13:07:26.507539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.507741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.507777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.507919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.507952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.508137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.508170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.508412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.508444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 
00:27:06.538 [2024-10-15 13:07:26.508658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.508692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.508959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.508992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.509241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.509274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.509488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.509522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.509647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.509681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 
00:27:06.538 [2024-10-15 13:07:26.509870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.509903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.510019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.510052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.510273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.510346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.510499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.510535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.510741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.510776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 
00:27:06.538 [2024-10-15 13:07:26.510888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.510922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.511058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.511092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.511281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.538 [2024-10-15 13:07:26.511314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.538 qpair failed and we were unable to recover it. 00:27:06.538 [2024-10-15 13:07:26.511509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.511543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.511768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.511801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 
00:27:06.539 [2024-10-15 13:07:26.511977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.512010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.512252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.512285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.512417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.512450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.512576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.512622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.512828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.512862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 
00:27:06.539 [2024-10-15 13:07:26.513055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.513098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.513234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.513267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.513441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.513473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.513586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.513633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.513763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.513796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 
00:27:06.539 [2024-10-15 13:07:26.513932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.513965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.514141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.514174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.514278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.514310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.514552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.514585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.514781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.514814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 
00:27:06.539 [2024-10-15 13:07:26.515078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.515111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.515227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.515260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.515436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.515469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.515610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.515644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.515844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.515878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 
00:27:06.539 [2024-10-15 13:07:26.516016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.516049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.516316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.516350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.516484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.516517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.516779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.516814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.516990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.517023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 
00:27:06.539 [2024-10-15 13:07:26.517140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.517174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.517305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.517339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.517530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.517563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.517744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.517778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 00:27:06.539 [2024-10-15 13:07:26.517963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.539 [2024-10-15 13:07:26.517996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.539 qpair failed and we were unable to recover it. 
00:27:06.539 [2024-10-15 13:07:26.518176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.518209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 00:27:06.540 [2024-10-15 13:07:26.518408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.518440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 00:27:06.540 [2024-10-15 13:07:26.518667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.518739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 00:27:06.540 [2024-10-15 13:07:26.518937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.518975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 00:27:06.540 [2024-10-15 13:07:26.519167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.519201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 
00:27:06.540 [2024-10-15 13:07:26.519439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.519474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 00:27:06.540 [2024-10-15 13:07:26.519734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.519769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 00:27:06.540 [2024-10-15 13:07:26.519875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.519908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 00:27:06.540 [2024-10-15 13:07:26.520092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.520124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 00:27:06.540 [2024-10-15 13:07:26.520240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.520273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 
00:27:06.540 [2024-10-15 13:07:26.520461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.520495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 00:27:06.540 [2024-10-15 13:07:26.520681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.520716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 00:27:06.540 [2024-10-15 13:07:26.520836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.520869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 00:27:06.540 [2024-10-15 13:07:26.521053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.521085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 00:27:06.540 [2024-10-15 13:07:26.521282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.540 [2024-10-15 13:07:26.521315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.540 qpair failed and we were unable to recover it. 
00:27:06.544 [2024-10-15 13:07:26.547377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.547412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.547534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.547568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.547691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.547727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.547939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.547974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.548157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.548191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 
00:27:06.544 [2024-10-15 13:07:26.548382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.548415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.550370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.550429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.550738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.550776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.551022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.551056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.551244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.551278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 
00:27:06.544 [2024-10-15 13:07:26.551400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.551432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.551626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.551661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.551901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.551934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.552129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.552163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.552303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.552336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 
00:27:06.544 [2024-10-15 13:07:26.552472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.552506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.552716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.552751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.552874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.552907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.553105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.553138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.553383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.553418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 
00:27:06.544 [2024-10-15 13:07:26.553637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.553674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.553808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.553842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.553981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.554015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.554150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.554183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.556078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.556137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 
00:27:06.544 [2024-10-15 13:07:26.556451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.556485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.556674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.556709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.556846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.556879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.557004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.544 [2024-10-15 13:07:26.557036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.544 qpair failed and we were unable to recover it. 00:27:06.544 [2024-10-15 13:07:26.557221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.557255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 
00:27:06.545 [2024-10-15 13:07:26.557440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.557473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.557679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.557713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.557896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.557929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.558108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.558142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.558267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.558299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 
00:27:06.545 [2024-10-15 13:07:26.558409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.558442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.558575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.558619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.558832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.558871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.559046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.559079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.559283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.559316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 
00:27:06.545 [2024-10-15 13:07:26.559423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.559456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.559576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.559619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.559750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.559783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.559918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.559951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.560134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.560167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 
00:27:06.545 [2024-10-15 13:07:26.560376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.560409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.560662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.560696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.560814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.560847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.560977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.561010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.561194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.561227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 
00:27:06.545 [2024-10-15 13:07:26.561338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.561371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.561510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.561542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.561753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.561787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.561909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.561943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.562069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.562103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 
00:27:06.545 [2024-10-15 13:07:26.562287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.562321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.562511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.562544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.562687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.562718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.562840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.562869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.562982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.563012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 
00:27:06.545 [2024-10-15 13:07:26.563194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.563224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.564938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.545 [2024-10-15 13:07:26.564990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.545 qpair failed and we were unable to recover it. 00:27:06.545 [2024-10-15 13:07:26.565310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.565342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.565527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.565558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.565764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.565796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 
00:27:06.546 [2024-10-15 13:07:26.565942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.565974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.566097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.566130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.566253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.566286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.566571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.566614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.566800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.566833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 
00:27:06.546 [2024-10-15 13:07:26.566951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.566984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.567116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.567149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.567256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.567289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.567458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.567491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.567634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.567665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 
00:27:06.546 [2024-10-15 13:07:26.567837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.567870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.568113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.568145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.568273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.568312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.568436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.568469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.568591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.568634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 
00:27:06.546 [2024-10-15 13:07:26.568775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.568808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.568911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.568944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.569128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.569161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.569280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.569313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 00:27:06.546 [2024-10-15 13:07:26.569488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.546 [2024-10-15 13:07:26.569521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.546 qpair failed and we were unable to recover it. 
00:27:06.546 [2024-10-15 13:07:26.569699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.546 [2024-10-15 13:07:26.569734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.546 qpair failed and we were unable to recover it.
00:27:06.546 [2024-10-15 13:07:26.569843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.546 [2024-10-15 13:07:26.569876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.546 qpair failed and we were unable to recover it.
00:27:06.546 [2024-10-15 13:07:26.570062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.546 [2024-10-15 13:07:26.570094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.546 qpair failed and we were unable to recover it.
00:27:06.546 [2024-10-15 13:07:26.570230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.546 [2024-10-15 13:07:26.570263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.546 qpair failed and we were unable to recover it.
00:27:06.546 [2024-10-15 13:07:26.570369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.546 [2024-10-15 13:07:26.570402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.546 qpair failed and we were unable to recover it.
00:27:06.546 [2024-10-15 13:07:26.570580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.546 [2024-10-15 13:07:26.570632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.546 qpair failed and we were unable to recover it.
00:27:06.546 [2024-10-15 13:07:26.570812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.546 [2024-10-15 13:07:26.570843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.546 qpair failed and we were unable to recover it.
00:27:06.546 [2024-10-15 13:07:26.571010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.546 [2024-10-15 13:07:26.571043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.546 qpair failed and we were unable to recover it.
00:27:06.546 [2024-10-15 13:07:26.571161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.546 [2024-10-15 13:07:26.571195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.546 qpair failed and we were unable to recover it.
00:27:06.546 [2024-10-15 13:07:26.571404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.546 [2024-10-15 13:07:26.571436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.546 qpair failed and we were unable to recover it.
00:27:06.546 [2024-10-15 13:07:26.571624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.571655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.571893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.571923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.572156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.572187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.572316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.572345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.572466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.572496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.572678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.572710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.572842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.572873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.573044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.573077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.573261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.573294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.573469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.573501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.573640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.573672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.573862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.573892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.574013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.574042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.574213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.574243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.574373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.574403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.574527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.574571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.574780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.574813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.574927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.574959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.575135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.575168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.575298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.575329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.575503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.575536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.575644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.575679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.575859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.575898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.576009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.576042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.576229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.576261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.576439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.576474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.576660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.576691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.576793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.576821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.576941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.576970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.577136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.577166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.577350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.577382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.577499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.547 [2024-10-15 13:07:26.577532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.547 qpair failed and we were unable to recover it.
00:27:06.547 [2024-10-15 13:07:26.577721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.577755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.577946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.577978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.578195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.578228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.578421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.578453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.578598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.578638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.578745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.578775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.578880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.578910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.579022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.579052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.579170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.579202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.579326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.579356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.579468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.579498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.579622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.579672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.579776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.579820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.579936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.579969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.580139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.580172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.580294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.580326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.580539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.580572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.580783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.580815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.581010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.581041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.581155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.581186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.581425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.581456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.581553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.581584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.581757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.581788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.582049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.582081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.582263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.582294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.582475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.582506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.582654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.582687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.582811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.582843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.582952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.582982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.583159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.583190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.583298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.583334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.583506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.583538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.548 qpair failed and we were unable to recover it.
00:27:06.548 [2024-10-15 13:07:26.583772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.548 [2024-10-15 13:07:26.583806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.583996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.584029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.584137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.584170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.584409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.584442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.584566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.584597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.584717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.584747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.584850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.584880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.584995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.585027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.585204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.585237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.585427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.585459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.585629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.585663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.585789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.585818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.586057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.586086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.586366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.586411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.586594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.586651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.586770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.586803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.586986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.587019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.587196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.587228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.587361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.587395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.587590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.587632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.587817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.549 [2024-10-15 13:07:26.587847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.549 qpair failed and we were unable to recover it.
00:27:06.549 [2024-10-15 13:07:26.588084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.549 [2024-10-15 13:07:26.588112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.549 qpair failed and we were unable to recover it. 00:27:06.549 [2024-10-15 13:07:26.588214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.549 [2024-10-15 13:07:26.588245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.549 qpair failed and we were unable to recover it. 00:27:06.549 [2024-10-15 13:07:26.588446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.549 [2024-10-15 13:07:26.588479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.549 qpair failed and we were unable to recover it. 00:27:06.549 [2024-10-15 13:07:26.588595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.549 [2024-10-15 13:07:26.588641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.549 qpair failed and we were unable to recover it. 00:27:06.549 [2024-10-15 13:07:26.588830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.549 [2024-10-15 13:07:26.588863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.549 qpair failed and we were unable to recover it. 
00:27:06.549 [2024-10-15 13:07:26.588996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.549 [2024-10-15 13:07:26.589029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.549 qpair failed and we were unable to recover it. 00:27:06.549 [2024-10-15 13:07:26.589139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.549 [2024-10-15 13:07:26.589171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.549 qpair failed and we were unable to recover it. 00:27:06.549 [2024-10-15 13:07:26.589299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.589331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.589572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.589616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.589798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.589827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 
00:27:06.550 [2024-10-15 13:07:26.589950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.589980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.590085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.590114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.590230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.590274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.590396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.590426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.590619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.590650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 
00:27:06.550 [2024-10-15 13:07:26.590750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.590782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.590909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.590938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.591050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.591085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.591299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.591330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.591445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.591475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 
00:27:06.550 [2024-10-15 13:07:26.591597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.591637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.591806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.591837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.592010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.592040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.592219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.592249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.592418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.592451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 
00:27:06.550 [2024-10-15 13:07:26.592704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.592739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.592913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.592945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.593048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.593081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.593207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.593239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.593478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.593510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 
00:27:06.550 [2024-10-15 13:07:26.593652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.593686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.593812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.593845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.593951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.593983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.594090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.594123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.594368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.594401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 
00:27:06.550 [2024-10-15 13:07:26.595807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.595862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.596078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.596114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.596376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.596409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.596675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.550 [2024-10-15 13:07:26.596710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.550 qpair failed and we were unable to recover it. 00:27:06.550 [2024-10-15 13:07:26.596853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.596884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 
00:27:06.551 [2024-10-15 13:07:26.597169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.597202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.597390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.597422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.597559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.597591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.597738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.597771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.597948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.598020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 
00:27:06.551 [2024-10-15 13:07:26.598177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.598214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.598393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.598428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.598550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.598582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.598736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.598770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.598986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.599019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 
00:27:06.551 [2024-10-15 13:07:26.599148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.599182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.599410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.599442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.599559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.599592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.599817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.599850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.599971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.600003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 
00:27:06.551 [2024-10-15 13:07:26.600131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.600165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.600381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.600413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.600532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.600580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.600818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.600853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.600976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.601009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 
00:27:06.551 [2024-10-15 13:07:26.601197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.601229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.601414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.601449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.601662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.601696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.601804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.601837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.602075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.602107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 
00:27:06.551 [2024-10-15 13:07:26.602279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.602312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.602446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.602479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.602617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.602652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.602900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.602933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.603042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.603075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 
00:27:06.551 [2024-10-15 13:07:26.603194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.551 [2024-10-15 13:07:26.603226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.551 qpair failed and we were unable to recover it. 00:27:06.551 [2024-10-15 13:07:26.603423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.603456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.603587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.603631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.603753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.603785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.603958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.603990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 
00:27:06.552 [2024-10-15 13:07:26.604183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.604217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.604347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.604380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.604498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.604531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.604652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.604687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.604811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.604842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 
00:27:06.552 [2024-10-15 13:07:26.605051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.605085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.605325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.605359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.605478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.605510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.605697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.605730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.605853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.605887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 
00:27:06.552 [2024-10-15 13:07:26.606017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.606050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.606230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.606264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.606546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.606579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.606716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.606750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.606881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.606916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 
00:27:06.552 [2024-10-15 13:07:26.607048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.607080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.607265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.607299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.607425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.607457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.607637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.607672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.607848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.607882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 
00:27:06.552 [2024-10-15 13:07:26.607988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.608023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.608136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.608168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.608361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.608402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.608580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.608622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.608735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.608767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 
00:27:06.552 [2024-10-15 13:07:26.608891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.608923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.609105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.552 [2024-10-15 13:07:26.609137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.552 qpair failed and we were unable to recover it. 00:27:06.552 [2024-10-15 13:07:26.609310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.609343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.609542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.609576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.609708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.609742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 
00:27:06.553 [2024-10-15 13:07:26.609849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.609881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.610058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.610091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.610266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.610299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.610406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.610438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.610632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.610668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 
00:27:06.553 [2024-10-15 13:07:26.610858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.610890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.611076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.611109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.611381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.611414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.611520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.611554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.611680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.611714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 
00:27:06.553 [2024-10-15 13:07:26.611910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.611942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.612063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.612097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.612199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.612232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.612477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.612509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.612620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.612655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 
00:27:06.553 [2024-10-15 13:07:26.612769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.612802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.612922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.612956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.613127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.613160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.613270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.613303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.613458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.613530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 
00:27:06.553 [2024-10-15 13:07:26.613757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.613794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.613929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.613960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.614090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.614122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.614294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.614327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.614438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.614470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 
00:27:06.553 [2024-10-15 13:07:26.614674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.614707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.614882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.614914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.615184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.615216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.615335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.615367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 00:27:06.553 [2024-10-15 13:07:26.615559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.553 [2024-10-15 13:07:26.615591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.553 qpair failed and we were unable to recover it. 
00:27:06.554 [2024-10-15 13:07:26.615800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.615842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.615958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.615991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.616189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.616228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.616352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.616385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.616589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.616634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 
00:27:06.554 [2024-10-15 13:07:26.616828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.616860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.617039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.617071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.617195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.617227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.617352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.617384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.617509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.617540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 
00:27:06.554 [2024-10-15 13:07:26.617796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.617830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.618061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.618094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.618339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.618372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.618500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.618532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.618714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.618749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 
00:27:06.554 [2024-10-15 13:07:26.618887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.618918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.619034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.619068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.619184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.619217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.619341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.619374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.619557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.619591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 
00:27:06.554 [2024-10-15 13:07:26.619714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.619745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.619857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.619890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.620069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.620102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.620208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.620240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.620506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.620540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 
00:27:06.554 [2024-10-15 13:07:26.620671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.620705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.620833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.620864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.554 qpair failed and we were unable to recover it. 00:27:06.554 [2024-10-15 13:07:26.621046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.554 [2024-10-15 13:07:26.621078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.621183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.621216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.621373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.621444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 
00:27:06.555 [2024-10-15 13:07:26.621704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.621744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.621864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.621899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.622023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.622056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.622184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.622217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.622339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.622372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 
00:27:06.555 [2024-10-15 13:07:26.622491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.622524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.622653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.622687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.622889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.622923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.623047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.623080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.623251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.623284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 
00:27:06.555 [2024-10-15 13:07:26.623398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.623431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.624792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.624846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.625048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.625080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.625266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.625300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.625567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.625614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 
00:27:06.555 [2024-10-15 13:07:26.625744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.625776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.625974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.626008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.626127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.626160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.626290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.626322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.626445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.626478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 
00:27:06.555 [2024-10-15 13:07:26.626596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.626647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.626830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.626864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.627042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.627075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.627215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.627248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 00:27:06.555 [2024-10-15 13:07:26.627423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.555 [2024-10-15 13:07:26.627457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.555 qpair failed and we were unable to recover it. 
00:27:06.555 [2024-10-15 13:07:26.627585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.555 [2024-10-15 13:07:26.627630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.555 qpair failed and we were unable to recover it.
00:27:06.555 [2024-10-15 13:07:26.627805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.555 [2024-10-15 13:07:26.627845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.555 qpair failed and we were unable to recover it.
00:27:06.555 [2024-10-15 13:07:26.628065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.555 [2024-10-15 13:07:26.628099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.555 qpair failed and we were unable to recover it.
00:27:06.555 [2024-10-15 13:07:26.628206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.555 [2024-10-15 13:07:26.628239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.555 qpair failed and we were unable to recover it.
00:27:06.555 [2024-10-15 13:07:26.628340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.555 [2024-10-15 13:07:26.628373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.555 qpair failed and we were unable to recover it.
00:27:06.555 [2024-10-15 13:07:26.628492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.628524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.628708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.628743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.628861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.628894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.629013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.629045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.629146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.629177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.629303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.629336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.629459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.629491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.629668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.629701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.629873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.629905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.630022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.630054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.630170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.630203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.630334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.630368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.630485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.630518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.630644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.630677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.630804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.630838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.631019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.631052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.631155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.631188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.631312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.631344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.631446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.631479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.631623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.631656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.631847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.631879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.632050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.632083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.632203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.632236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.632360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.632392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.632568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.632612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.632735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.632768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.632940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.632972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.633089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.633122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.633240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.633273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.633409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.633441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.633621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.633656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.633830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.633863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.556 qpair failed and we were unable to recover it.
00:27:06.556 [2024-10-15 13:07:26.633993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.556 [2024-10-15 13:07:26.634027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.634155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.634187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.634372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.634404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.634514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.634547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.634730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.634763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.634994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.635067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.635271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.635308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.635454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.635488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.635689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.635724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.635853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.635887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.636074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.636107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.636276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.636307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.636439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.636472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.636682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.636720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.636923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.636956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.637130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.637163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.637351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.637384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.637557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.637589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.637730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.637764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.637895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.637929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.638097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.638131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.638370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.638403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.638541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.638572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.638698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.638736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.638842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.638875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.638996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.639029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.639136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.639169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.639350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.639383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.639623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.639658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.639843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.557 [2024-10-15 13:07:26.639875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.557 qpair failed and we were unable to recover it.
00:27:06.557 [2024-10-15 13:07:26.640064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.640097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.640207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.640239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.640445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.640482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.640631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.640666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.640867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.640899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.641107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.641142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.641337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.641371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.641549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.641582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.641763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.641796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.641924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.641957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.642066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.642098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.642212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.642244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.642489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.642522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.642770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.642804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.642923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.642956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.643132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.643171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.643346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.643378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.643570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.643614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.643799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.643832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.643939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.643970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.644088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.644121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.644237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.644269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.644460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.644492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.644729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.644764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.644885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.558 [2024-10-15 13:07:26.644918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.558 qpair failed and we were unable to recover it.
00:27:06.558 [2024-10-15 13:07:26.645040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.559 [2024-10-15 13:07:26.645072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.559 qpair failed and we were unable to recover it.
00:27:06.559 [2024-10-15 13:07:26.645197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.559 [2024-10-15 13:07:26.645229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.559 qpair failed and we were unable to recover it.
00:27:06.559 [2024-10-15 13:07:26.645405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.559 [2024-10-15 13:07:26.645438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.559 qpair failed and we were unable to recover it.
00:27:06.559 [2024-10-15 13:07:26.645626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.645661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.645789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.645822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.645947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.645979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.646097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.646130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.646250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.646282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 
00:27:06.559 [2024-10-15 13:07:26.646462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.646495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.646683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.646717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.646831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.646864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.646975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.647007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.647192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.647226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 
00:27:06.559 [2024-10-15 13:07:26.647399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.647433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.647636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.647670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.647861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.647893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.648013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.648045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.648230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.648301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 
00:27:06.559 [2024-10-15 13:07:26.648442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.648479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.648697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.648735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.648968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.649003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.649145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.649178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.649294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.649327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 
00:27:06.559 [2024-10-15 13:07:26.649515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.649547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.649684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.649719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.649904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.649936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.650127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.650160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.650336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.650369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 
00:27:06.559 [2024-10-15 13:07:26.650556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.650588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.650838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.650871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.559 [2024-10-15 13:07:26.651119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.559 [2024-10-15 13:07:26.651162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.559 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.651286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.651320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.651496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.651529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 
00:27:06.560 [2024-10-15 13:07:26.651658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.651692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.651808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.651840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.652017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.652050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.652166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.652199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.652379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.652411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 
00:27:06.560 [2024-10-15 13:07:26.652531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.652564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.652764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.652798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.652997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.653029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.653162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.653195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.653371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.653404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 
00:27:06.560 [2024-10-15 13:07:26.653593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.653638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.653852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.653886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.654087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.654120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.654231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.654264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.654452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.654486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 
00:27:06.560 [2024-10-15 13:07:26.654626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.654661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.654840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.654873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.655058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.655091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.655275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.655309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.655564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.655597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 
00:27:06.560 [2024-10-15 13:07:26.655745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.655778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.655901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.655934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.656056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.656089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.560 [2024-10-15 13:07:26.656261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.560 [2024-10-15 13:07:26.656294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.560 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.656447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.656517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 
00:27:06.561 [2024-10-15 13:07:26.656774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.656846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.657050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.657087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.657274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.657308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.657426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.657459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.657638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.657673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 
00:27:06.561 [2024-10-15 13:07:26.657862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.657896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.658028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.658061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.658175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.658208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.658397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.658430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.658564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.658595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 
00:27:06.561 [2024-10-15 13:07:26.658722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.658755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.658863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.658895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.659027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.659060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.659261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.659293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.659414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.659446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 
00:27:06.561 [2024-10-15 13:07:26.659654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.659688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.659799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.659830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.659939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.659971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.660089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.660122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.660222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.660253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 
00:27:06.561 [2024-10-15 13:07:26.660384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.660416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.660613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.660647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.660756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.660789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.660900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.660933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.661116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.661148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 
00:27:06.561 [2024-10-15 13:07:26.661259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.661291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.661463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.661502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.561 qpair failed and we were unable to recover it. 00:27:06.561 [2024-10-15 13:07:26.661624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.561 [2024-10-15 13:07:26.661658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.562 qpair failed and we were unable to recover it. 00:27:06.562 [2024-10-15 13:07:26.661779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.562 [2024-10-15 13:07:26.661812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.562 qpair failed and we were unable to recover it. 00:27:06.562 [2024-10-15 13:07:26.661929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.562 [2024-10-15 13:07:26.661961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.562 qpair failed and we were unable to recover it. 
00:27:06.562 [2024-10-15 13:07:26.662151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.562 [2024-10-15 13:07:26.662185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.562 qpair failed and we were unable to recover it.
[... identical three-line error sequence repeated continuously from 13:07:26.662 through 13:07:26.683, with tqpair values 0x9a0c60, 0x7ff120000b90, 0x7ff118000b90, and 0x7ff114000b90 ...]
00:27:06.565 [2024-10-15 13:07:26.683139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.565 [2024-10-15 13:07:26.683173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.565 qpair failed and we were unable to recover it. 00:27:06.565 [2024-10-15 13:07:26.683343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.565 [2024-10-15 13:07:26.683375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.565 qpair failed and we were unable to recover it. 00:27:06.565 [2024-10-15 13:07:26.683504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.565 [2024-10-15 13:07:26.683538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.565 qpair failed and we were unable to recover it. 00:27:06.565 [2024-10-15 13:07:26.683708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.565 [2024-10-15 13:07:26.683744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.565 qpair failed and we were unable to recover it. 00:27:06.565 [2024-10-15 13:07:26.683921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.565 [2024-10-15 13:07:26.683953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.565 qpair failed and we were unable to recover it. 
00:27:06.565 [2024-10-15 13:07:26.684065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.565 [2024-10-15 13:07:26.684098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.565 qpair failed and we were unable to recover it. 00:27:06.565 [2024-10-15 13:07:26.684206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.565 [2024-10-15 13:07:26.684239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.565 qpair failed and we were unable to recover it. 00:27:06.565 [2024-10-15 13:07:26.684359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.684392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.684509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.684543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.684664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.684699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 
00:27:06.566 [2024-10-15 13:07:26.684819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.684852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.684956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.684989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.685159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.685192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.685294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.685327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.685440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.685472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 
00:27:06.566 [2024-10-15 13:07:26.685590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.685636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.685826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.685869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.686056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.686089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.686213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.686245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.686356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.686389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 
00:27:06.566 [2024-10-15 13:07:26.686567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.686612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.686727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.686760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.686957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.686990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.687126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.687159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.687329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.687362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 
00:27:06.566 [2024-10-15 13:07:26.687546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.687579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.687713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.687747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.687985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.688018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.688200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.688233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 00:27:06.566 [2024-10-15 13:07:26.688344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.688377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it. 
00:27:06.566 [2024-10-15 13:07:26.688596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.566 [2024-10-15 13:07:26.688681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.566 qpair failed and we were unable to recover it.
00:27:06.568 [2024-10-15 13:07:26.696551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.568 [2024-10-15 13:07:26.696615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.568 qpair failed and we were unable to recover it.
00:27:06.569 [2024-10-15 13:07:26.701245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.701277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.701476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.701510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.701704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.701740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.701855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.701889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.702068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.702101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 
00:27:06.569 [2024-10-15 13:07:26.702283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.702318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.702434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.702468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.702592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.702633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.702815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.702848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.703022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.703055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 
00:27:06.569 [2024-10-15 13:07:26.703235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.703268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.703380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.703413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.703546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.703578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.703710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.703744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.703859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.703892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 
00:27:06.569 [2024-10-15 13:07:26.704030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.704070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.704200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.704233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.704345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.704378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.704568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.704612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.704727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.704760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 
00:27:06.569 [2024-10-15 13:07:26.704884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.704917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.705033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.705067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.705170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.705202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.705331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.705364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.705469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.705502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 
00:27:06.569 [2024-10-15 13:07:26.705620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.705656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.705834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.705866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.569 [2024-10-15 13:07:26.705971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-10-15 13:07:26.706004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.569 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.706194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.706236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.706355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.706388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 
00:27:06.570 [2024-10-15 13:07:26.706590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.706635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.706780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.706813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.706937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.706969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.707156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.707190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.707296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.707328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 
00:27:06.570 [2024-10-15 13:07:26.707447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.707480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.707611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.707645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.707761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.707795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.707910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.707944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.708145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.708178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 
00:27:06.570 [2024-10-15 13:07:26.708365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.708398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.708517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.708550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.708747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.708782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.708981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.709013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.709137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.709170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 
00:27:06.570 [2024-10-15 13:07:26.709312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.709344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.709463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.709495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.709598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.709643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.709751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.709784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.709905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.709938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 
00:27:06.570 [2024-10-15 13:07:26.710057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.710090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.710375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.710409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.710520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.710554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.710702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.710736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.710937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.710971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 
00:27:06.570 [2024-10-15 13:07:26.711095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.711139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.711249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.711282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.711396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.711428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.711714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.711749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.711867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.711900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 
00:27:06.570 [2024-10-15 13:07:26.712022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.712054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.712230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.712263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.712430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.712463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.570 qpair failed and we were unable to recover it. 00:27:06.570 [2024-10-15 13:07:26.712580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-10-15 13:07:26.712626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.712821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.712854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 
00:27:06.571 [2024-10-15 13:07:26.712977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.713011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.713197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.713230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.713413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.713446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.713647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.713682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.713802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.713835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 
00:27:06.571 [2024-10-15 13:07:26.714019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.714052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.714175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.714208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.714351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.714384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.714504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.714536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.714671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.714705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 
00:27:06.571 [2024-10-15 13:07:26.714827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.714860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.714983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.715016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.715139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.715172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.715288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.715320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.715489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.715522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 
00:27:06.571 [2024-10-15 13:07:26.715627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.715661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.715765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.715798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.715915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.715953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.716058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.716091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.716296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.716329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 
00:27:06.571 [2024-10-15 13:07:26.716457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.716490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.716658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.716693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.716805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.716838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.717009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.717042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.717146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.717180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 
00:27:06.571 [2024-10-15 13:07:26.717370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.571 [2024-10-15 13:07:26.717402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.571 qpair failed and we were unable to recover it. 00:27:06.571 [2024-10-15 13:07:26.717533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.717567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.717749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.717784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.717889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.717921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.718035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.718067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 
00:27:06.572 [2024-10-15 13:07:26.718238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.718272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.718408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.718440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.718685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.718719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.718823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.718856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.718961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.718993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 
00:27:06.572 [2024-10-15 13:07:26.719109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.719142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.719260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.719293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.719408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.719441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.719649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.719684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.719855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.719888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 
00:27:06.572 [2024-10-15 13:07:26.720062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.720096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.720213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.720246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.720363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.720395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.720518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.720551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.720682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.720716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 
00:27:06.572 [2024-10-15 13:07:26.720848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.720881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.721056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.721089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.721210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.721243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.721354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.721386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.721489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.721522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 
00:27:06.572 [2024-10-15 13:07:26.721657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.721692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.721880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.721913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.722088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.722121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.722291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.722324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.722454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.722487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 
00:27:06.572 [2024-10-15 13:07:26.722731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.722765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.722876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.722909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.572 [2024-10-15 13:07:26.723042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.572 [2024-10-15 13:07:26.723075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.572 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.723198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.723236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.723474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.723507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 
00:27:06.573 [2024-10-15 13:07:26.723686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.723721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.723900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.723933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.724043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.724075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.724197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.724229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.724357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.724390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 
00:27:06.573 [2024-10-15 13:07:26.724495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.724528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.724709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.724744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.724914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.724947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.725064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.725097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.725275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.725308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 
00:27:06.573 [2024-10-15 13:07:26.725551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.725584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.725769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.725803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.725937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.725970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.726080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.726113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.726300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.726332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 
00:27:06.573 [2024-10-15 13:07:26.726442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.726475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.726593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.726635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.726884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.726917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.727033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.727065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.727235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.727268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 
00:27:06.573 [2024-10-15 13:07:26.727438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.727472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.727575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.727617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.727807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.727841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.727948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.727981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.728155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.728188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 
00:27:06.573 [2024-10-15 13:07:26.728320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.728358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.728485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.728517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.728690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.728724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.728909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.728941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.729182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.729214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 
00:27:06.573 [2024-10-15 13:07:26.729341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.729374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.729477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.729509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.729682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.729717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.573 qpair failed and we were unable to recover it. 00:27:06.573 [2024-10-15 13:07:26.729830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.573 [2024-10-15 13:07:26.729862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.730033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.730066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 
00:27:06.574 [2024-10-15 13:07:26.730253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.730286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.730494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.730528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.730712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.730746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.730922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.730954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.731133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.731167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 
00:27:06.574 [2024-10-15 13:07:26.731292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.731325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.731496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.731528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.731646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.731680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.731789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.731823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.731997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.732030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 
00:27:06.574 [2024-10-15 13:07:26.732286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.732320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.732493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.732525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.732702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.732737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.732941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.732975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.733145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.733178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 
00:27:06.574 [2024-10-15 13:07:26.733297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.733330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.733432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.733466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.733707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.733742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.734030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.734063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 00:27:06.574 [2024-10-15 13:07:26.734196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.574 [2024-10-15 13:07:26.734230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.574 qpair failed and we were unable to recover it. 
00:27:06.574 [2024-10-15 13:07:26.734523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.574 [2024-10-15 13:07:26.734556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.574 qpair failed and we were unable to recover it.
00:27:06.574 [2024-10-15 13:07:26.734684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.574 [2024-10-15 13:07:26.734719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.574 qpair failed and we were unable to recover it.
00:27:06.574 [2024-10-15 13:07:26.734909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.574 [2024-10-15 13:07:26.734943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.574 qpair failed and we were unable to recover it.
00:27:06.574 [2024-10-15 13:07:26.735067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.574 [2024-10-15 13:07:26.735100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.574 qpair failed and we were unable to recover it.
00:27:06.574 [2024-10-15 13:07:26.735227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.574 [2024-10-15 13:07:26.735261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.574 qpair failed and we were unable to recover it.
00:27:06.574 [2024-10-15 13:07:26.735472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.574 [2024-10-15 13:07:26.735506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.574 qpair failed and we were unable to recover it.
00:27:06.574 [2024-10-15 13:07:26.735638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.574 [2024-10-15 13:07:26.735673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.574 qpair failed and we were unable to recover it.
00:27:06.574 [2024-10-15 13:07:26.735844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.574 [2024-10-15 13:07:26.735877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.574 qpair failed and we were unable to recover it.
00:27:06.574 [2024-10-15 13:07:26.736067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.574 [2024-10-15 13:07:26.736100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.574 qpair failed and we were unable to recover it.
00:27:06.574 [2024-10-15 13:07:26.736229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.574 [2024-10-15 13:07:26.736262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.574 qpair failed and we were unable to recover it.
00:27:06.574 [2024-10-15 13:07:26.736432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.574 [2024-10-15 13:07:26.736465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.574 qpair failed and we were unable to recover it.
00:27:06.574 [2024-10-15 13:07:26.736703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.736743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.736870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.736904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.737165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.737198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.737392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.737425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.737558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.737592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.737722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.737755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.737971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.738004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.738193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.738226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.738343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.738376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.738566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.738599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.738799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.738833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.739024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.739056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.739237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.739270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.739374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.739407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.739618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.739651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.739852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.739886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.740068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.740101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.740342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.740375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.740506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.740539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.740777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.740812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.740983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.741016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.741300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.741332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.741452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.741483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.741659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.741693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.741934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.741967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.742107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.742139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.742249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.742282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.742484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.742522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.742708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.742742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.742921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.742954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.743219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.743252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.743436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.743469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.743655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.743689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.743888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.743921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.744055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.575 [2024-10-15 13:07:26.744089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.575 qpair failed and we were unable to recover it.
00:27:06.575 [2024-10-15 13:07:26.744229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.744262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.744466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.744499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.744701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.744735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.745026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.745059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.745250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.745283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.745401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.745435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.745628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.745663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.745852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.745884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.746063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.746097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.746271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.746304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.746548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.746581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.746872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.746906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.747025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.747058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.747180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.747213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.747474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.747505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.747708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.747742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.747928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.747961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.748079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.748111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.748312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.748345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.748467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.748501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.748753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.748787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.748895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.748928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.749193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.749225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.749343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.749376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.749584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.749626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.749753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.749786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.749916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.749950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.750152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.750185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.750298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.750331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.750510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.750543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.750698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.750732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.750901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.576 [2024-10-15 13:07:26.750935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.576 qpair failed and we were unable to recover it.
00:27:06.576 [2024-10-15 13:07:26.751123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.751154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.751287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.751326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.751564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.751598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.751720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.751753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.751935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.751969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.752090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.752123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.752237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.752270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.752558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.752591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.752798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.752831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.753032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.753064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.753195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.753228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.753474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.753507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.753684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.753719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.753838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.753871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.754004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.754038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.754171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.754203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.754441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.577 [2024-10-15 13:07:26.754474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.577 qpair failed and we were unable to recover it.
00:27:06.577 [2024-10-15 13:07:26.754658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.754692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.577 [2024-10-15 13:07:26.754828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.754861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.577 [2024-10-15 13:07:26.755069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.755102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.577 [2024-10-15 13:07:26.755226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.755259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.577 [2024-10-15 13:07:26.755449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.755482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 
00:27:06.577 [2024-10-15 13:07:26.755617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.755652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.577 [2024-10-15 13:07:26.755762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.755795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.577 [2024-10-15 13:07:26.755976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.756009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.577 [2024-10-15 13:07:26.756179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.756212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.577 [2024-10-15 13:07:26.756390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.756422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 
00:27:06.577 [2024-10-15 13:07:26.756694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.756728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.577 [2024-10-15 13:07:26.756844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.756883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.577 [2024-10-15 13:07:26.757075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.757107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.577 [2024-10-15 13:07:26.757312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.757345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.577 [2024-10-15 13:07:26.757452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.757486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 
00:27:06.577 [2024-10-15 13:07:26.757674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.577 [2024-10-15 13:07:26.757708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.577 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.757896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.757928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.758097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.758130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.758394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.758426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.758668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.758701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 
00:27:06.578 [2024-10-15 13:07:26.758838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.758871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.759059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.759092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.759264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.759297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.759495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.759529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.759649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.759684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 
00:27:06.578 [2024-10-15 13:07:26.759849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.759922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.760187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.760225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.760448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.760480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.760720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.760755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.760893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.760926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 
00:27:06.578 [2024-10-15 13:07:26.761140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.761174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.761369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.761401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.761516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.761548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.761734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.761770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.761897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.761930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 
00:27:06.578 [2024-10-15 13:07:26.762057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.762090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.762281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.762313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.762515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.762548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.762829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.762873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.763114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.763148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 
00:27:06.578 [2024-10-15 13:07:26.763335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.763368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.763616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.763650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.763829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.763863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.764063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.764096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.764299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.764331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 
00:27:06.578 [2024-10-15 13:07:26.764593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.764636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.764843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.764876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.765155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.765188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.765392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.765425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.578 [2024-10-15 13:07:26.765610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.765643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 
00:27:06.578 [2024-10-15 13:07:26.765859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.578 [2024-10-15 13:07:26.765892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.578 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.766159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.766192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.766331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.766364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.766538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.766571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.766769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.766801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 
00:27:06.579 [2024-10-15 13:07:26.766989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.767022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.767203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.767235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.767417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.767451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.767582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.767627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.767871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.767904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 
00:27:06.579 [2024-10-15 13:07:26.768090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.768123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.768360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.768392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.768562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.768595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.768816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.768849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.769125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.769159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 
00:27:06.579 [2024-10-15 13:07:26.769345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.769378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.769595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.769639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.769909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.769942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.770072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.770104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.770284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.770318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 
00:27:06.579 [2024-10-15 13:07:26.770547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.770580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.770866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.770899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.771148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.771181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.771375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.771408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.771649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.771684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 
00:27:06.579 [2024-10-15 13:07:26.771876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.771910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.772155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.772187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.772402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.772434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.772622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.772663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.772935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.772968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 
00:27:06.579 [2024-10-15 13:07:26.773229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.773262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.773399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.773431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.773608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.773642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.773836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.773867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 00:27:06.579 [2024-10-15 13:07:26.773992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.774025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it. 
00:27:06.579 [2024-10-15 13:07:26.774146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.579 [2024-10-15 13:07:26.774179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.579 qpair failed and we were unable to recover it.
[... the three log lines above repeat for every subsequent connect attempt between 13:07:26.774 and 13:07:26.799: ~20 attempts against tqpair=0x7ff114000b90, ~40 against tqpair=0x7ff120000b90, and ~55 against tqpair=0x9a0c60, all targeting addr=10.0.0.2, port=4420 and all failing with errno = 111 ...]
00:27:06.582 [2024-10-15 13:07:26.799857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-10-15 13:07:26.799891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-10-15 13:07:26.800013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-10-15 13:07:26.800046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.800181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.800214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.800400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.800433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.800545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.800579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 
00:27:06.583 [2024-10-15 13:07:26.800789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.800821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.801059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.801092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.801231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.801264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.801448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.801481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.801584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.801623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 
00:27:06.583 [2024-10-15 13:07:26.801864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.801897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.802151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.802182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.802383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.802416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.802635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.802675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.802876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.802910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 
00:27:06.583 [2024-10-15 13:07:26.803169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.803202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.803440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.803472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.803590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.803635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.803905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.803938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.804147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.804179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 
00:27:06.583 [2024-10-15 13:07:26.804367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.804400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.804515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.804548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.804811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.804845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.805031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.805064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.805306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.805338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 
00:27:06.583 [2024-10-15 13:07:26.805614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.805648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.805772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.805804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.805993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.806026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.806222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.806255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.806432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.806465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 
00:27:06.583 [2024-10-15 13:07:26.806596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.806642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.806847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.806879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.807073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.807105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.807381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.807415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.807591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.807635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 
00:27:06.583 [2024-10-15 13:07:26.807912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.807944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.808119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.808151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-10-15 13:07:26.808373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-10-15 13:07:26.808407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.808515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.808547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.808741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.808776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 
00:27:06.584 [2024-10-15 13:07:26.808880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.808919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.809108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.809142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.809333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.809366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.809559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.809592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.809851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.809885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 
00:27:06.584 [2024-10-15 13:07:26.810123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.810155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.810351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.810383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.810504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.810538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.810749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.810784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.810997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.811030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 
00:27:06.584 [2024-10-15 13:07:26.811261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.811293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.811475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.811508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.811700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.811734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.811921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.811954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.812149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.812183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 
00:27:06.584 [2024-10-15 13:07:26.812298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.812330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.812537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.812570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.812752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.812786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.812995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.813028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.813305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.813338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 
00:27:06.584 [2024-10-15 13:07:26.813456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.813489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.813621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.813655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.813899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.813932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.814122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.814154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.814331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.814365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 
00:27:06.584 [2024-10-15 13:07:26.814624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.814659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.814881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.814915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.815096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.815128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.815316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.815350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.815535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.815568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 
00:27:06.584 [2024-10-15 13:07:26.815698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.815732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.815923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.815956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.816159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.816192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.816458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.816492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-10-15 13:07:26.816758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-10-15 13:07:26.816792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 
00:27:06.585 [2024-10-15 13:07:26.817002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.585 [2024-10-15 13:07:26.817034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.585 qpair failed and we were unable to recover it. 00:27:06.585 [2024-10-15 13:07:26.817240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.585 [2024-10-15 13:07:26.817273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.585 qpair failed and we were unable to recover it. 00:27:06.585 [2024-10-15 13:07:26.817402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.585 [2024-10-15 13:07:26.817435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.585 qpair failed and we were unable to recover it. 00:27:06.585 [2024-10-15 13:07:26.817702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.585 [2024-10-15 13:07:26.817736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.585 qpair failed and we were unable to recover it. 00:27:06.585 [2024-10-15 13:07:26.817978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.585 [2024-10-15 13:07:26.818011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.585 qpair failed and we were unable to recover it. 
00:27:06.871 [2024-10-15 13:07:26.841355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.871 [2024-10-15 13:07:26.841389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.871 qpair failed and we were unable to recover it. 00:27:06.871 [2024-10-15 13:07:26.841629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.871 [2024-10-15 13:07:26.841664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.871 qpair failed and we were unable to recover it. 00:27:06.871 [2024-10-15 13:07:26.841777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.871 [2024-10-15 13:07:26.841809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.871 qpair failed and we were unable to recover it. 00:27:06.871 [2024-10-15 13:07:26.842050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.871 [2024-10-15 13:07:26.842083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.871 qpair failed and we were unable to recover it. 00:27:06.871 [2024-10-15 13:07:26.842328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.871 [2024-10-15 13:07:26.842361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.871 qpair failed and we were unable to recover it. 
00:27:06.871 [2024-10-15 13:07:26.842533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.871 [2024-10-15 13:07:26.842566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.871 qpair failed and we were unable to recover it. 00:27:06.871 [2024-10-15 13:07:26.842699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.871 [2024-10-15 13:07:26.842732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.871 qpair failed and we were unable to recover it. 00:27:06.871 [2024-10-15 13:07:26.842848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.871 [2024-10-15 13:07:26.842881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.871 qpair failed and we were unable to recover it. 00:27:06.871 [2024-10-15 13:07:26.843057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.843090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.843263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.843295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 
00:27:06.872 [2024-10-15 13:07:26.843533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.843567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.843787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.843821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.844009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.844042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.844246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.844279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.844523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.844555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 
00:27:06.872 [2024-10-15 13:07:26.844802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.844835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.844958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.844991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.845120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.845152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.845293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.845327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.845526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.845559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 
00:27:06.872 [2024-10-15 13:07:26.845746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.845778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.846039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.846071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.846192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.846225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.846417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.846449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.846627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.846662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 
00:27:06.872 [2024-10-15 13:07:26.846782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.846815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.847021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.847054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.847316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.847354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.847532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.847565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.847694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.847728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 
00:27:06.872 [2024-10-15 13:07:26.847936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.847970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.848088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.848121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.848291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.848323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.848536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.848570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.848757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.848791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 
00:27:06.872 [2024-10-15 13:07:26.849002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.849035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.872 qpair failed and we were unable to recover it. 00:27:06.872 [2024-10-15 13:07:26.849222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.872 [2024-10-15 13:07:26.849255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.849494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.849527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.849810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.849845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.850019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.850051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 
00:27:06.873 [2024-10-15 13:07:26.850242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.850275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.850500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.850535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.850711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.850744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.850938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.850971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.851239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.851271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 
00:27:06.873 [2024-10-15 13:07:26.851483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.851516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.851722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.851756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.852013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.852046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.852287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.852320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.852517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.852549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 
00:27:06.873 [2024-10-15 13:07:26.852764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.852798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.852981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.853013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.853193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.853226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.853410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.853442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.853638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.853673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 
00:27:06.873 [2024-10-15 13:07:26.853865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.853898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.854074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.854107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.854285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.854319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.854454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.854487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.854679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.854713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 
00:27:06.873 [2024-10-15 13:07:26.854837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.854870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.855109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.855142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.855325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.855357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.855619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.855653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 00:27:06.873 [2024-10-15 13:07:26.855872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.873 [2024-10-15 13:07:26.855905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.873 qpair failed and we were unable to recover it. 
00:27:06.873 [2024-10-15 13:07:26.856036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.856069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 00:27:06.874 [2024-10-15 13:07:26.856266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.856299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 00:27:06.874 [2024-10-15 13:07:26.856483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.856515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 00:27:06.874 [2024-10-15 13:07:26.856705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.856740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 00:27:06.874 [2024-10-15 13:07:26.856982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.857015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 
00:27:06.874 [2024-10-15 13:07:26.857219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.857252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 00:27:06.874 [2024-10-15 13:07:26.857456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.857489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 00:27:06.874 [2024-10-15 13:07:26.857620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.857654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 00:27:06.874 [2024-10-15 13:07:26.857853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.857885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 00:27:06.874 [2024-10-15 13:07:26.858008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.858041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 
00:27:06.874 [2024-10-15 13:07:26.858231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.858264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 00:27:06.874 [2024-10-15 13:07:26.858508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.858541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 00:27:06.874 [2024-10-15 13:07:26.858769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.858803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 00:27:06.874 [2024-10-15 13:07:26.859051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.859083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 00:27:06.874 [2024-10-15 13:07:26.859334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.874 [2024-10-15 13:07:26.859368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.874 qpair failed and we were unable to recover it. 
00:27:06.878 [2024-10-15 13:07:26.884860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.884892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.885089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.885121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.885368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.885401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.885584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.885626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.885812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.885846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 
00:27:06.878 [2024-10-15 13:07:26.886033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.886066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.886201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.886234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.886406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.886438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.886543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.886574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.886774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.886808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 
00:27:06.878 [2024-10-15 13:07:26.886999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.887032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.887148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.887180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.887465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.887499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.887758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.887793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.887980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.888018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 
00:27:06.878 [2024-10-15 13:07:26.888200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.888232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.888440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.888473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.888724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.888758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.889020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.889052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.889259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.889291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 
00:27:06.878 [2024-10-15 13:07:26.889531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.889563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.889763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.889797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.890010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.890043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.890233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.890266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 00:27:06.878 [2024-10-15 13:07:26.890505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.878 [2024-10-15 13:07:26.890537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.878 qpair failed and we were unable to recover it. 
00:27:06.879 [2024-10-15 13:07:26.890731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.890766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.890970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.891004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.891187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.891220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.891414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.891448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.891685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.891736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 
00:27:06.879 [2024-10-15 13:07:26.891938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.891970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.892213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.892246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.892368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.892401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.892585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.892639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.892762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.892795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 
00:27:06.879 [2024-10-15 13:07:26.892972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.893005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.893116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.893149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.893353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.893385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.893583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.893626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.893744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.893776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 
00:27:06.879 [2024-10-15 13:07:26.893964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.893997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.894267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.894306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.894481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.894514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.894705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.894740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.894870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.894902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 
00:27:06.879 [2024-10-15 13:07:26.895018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.895050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.895235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.895267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.895507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.895540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.895724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.895759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.895946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.895979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 
00:27:06.879 [2024-10-15 13:07:26.896245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.896278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.896406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.896438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.896625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.896659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.896829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.896862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.897127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.897160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 
00:27:06.879 [2024-10-15 13:07:26.897375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.897409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.897619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.879 [2024-10-15 13:07:26.897652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.879 qpair failed and we were unable to recover it. 00:27:06.879 [2024-10-15 13:07:26.897785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.897818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.898074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.898107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.898321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.898353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 
00:27:06.880 [2024-10-15 13:07:26.898530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.898563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.898698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.898732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.898920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.898953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.899064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.899097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.899314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.899348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 
00:27:06.880 [2024-10-15 13:07:26.899558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.899590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.899728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.899761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.899974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.900007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.900130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.900162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.900409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.900442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 
00:27:06.880 [2024-10-15 13:07:26.900647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.900683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.900872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.900905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.901110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.901142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.901264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.901297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.901416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.901448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 
00:27:06.880 [2024-10-15 13:07:26.901583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.901626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.901746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.901778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.901954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.901987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.902163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.902196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 00:27:06.880 [2024-10-15 13:07:26.902331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.880 [2024-10-15 13:07:26.902364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.880 qpair failed and we were unable to recover it. 
00:27:06.882 [2024-10-15 13:07:26.912309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.882 [2024-10-15 13:07:26.912342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.882 qpair failed and we were unable to recover it.
00:27:06.882 [2024-10-15 13:07:26.912583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.882 [2024-10-15 13:07:26.912625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.882 qpair failed and we were unable to recover it.
00:27:06.882 [2024-10-15 13:07:26.912826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.882 [2024-10-15 13:07:26.912860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.882 qpair failed and we were unable to recover it.
00:27:06.882 [2024-10-15 13:07:26.913151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.882 [2024-10-15 13:07:26.913223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.882 qpair failed and we were unable to recover it.
00:27:06.882 [2024-10-15 13:07:26.913427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.882 [2024-10-15 13:07:26.913464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.882 qpair failed and we were unable to recover it.
00:27:06.883 [2024-10-15 13:07:26.922176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.883 [2024-10-15 13:07:26.922209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.883 qpair failed and we were unable to recover it.
00:27:06.883 [2024-10-15 13:07:26.922402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.883 [2024-10-15 13:07:26.922434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.883 qpair failed and we were unable to recover it.
00:27:06.883 [2024-10-15 13:07:26.922623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.883 [2024-10-15 13:07:26.922658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.883 qpair failed and we were unable to recover it.
00:27:06.883 [2024-10-15 13:07:26.922830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.883 [2024-10-15 13:07:26.922902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.883 qpair failed and we were unable to recover it.
00:27:06.883 [2024-10-15 13:07:26.923085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.883 [2024-10-15 13:07:26.923155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.883 qpair failed and we were unable to recover it.
00:27:06.883 [2024-10-15 13:07:26.923293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.883 [2024-10-15 13:07:26.923330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.883 qpair failed and we were unable to recover it. 00:27:06.883 [2024-10-15 13:07:26.923513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.883 [2024-10-15 13:07:26.923546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.883 qpair failed and we were unable to recover it. 00:27:06.883 [2024-10-15 13:07:26.923726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.883 [2024-10-15 13:07:26.923760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.883 qpair failed and we were unable to recover it. 00:27:06.883 [2024-10-15 13:07:26.923994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.883 [2024-10-15 13:07:26.924027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.883 qpair failed and we were unable to recover it. 00:27:06.883 [2024-10-15 13:07:26.924155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.883 [2024-10-15 13:07:26.924189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.883 qpair failed and we were unable to recover it. 
00:27:06.883 [2024-10-15 13:07:26.924314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.883 [2024-10-15 13:07:26.924347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.883 qpair failed and we were unable to recover it. 00:27:06.883 [2024-10-15 13:07:26.924464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.883 [2024-10-15 13:07:26.924498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.883 qpair failed and we were unable to recover it. 00:27:06.883 [2024-10-15 13:07:26.924693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.883 [2024-10-15 13:07:26.924728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.883 qpair failed and we were unable to recover it. 00:27:06.883 [2024-10-15 13:07:26.924909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.883 [2024-10-15 13:07:26.924941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.883 qpair failed and we were unable to recover it. 00:27:06.883 [2024-10-15 13:07:26.925113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.883 [2024-10-15 13:07:26.925146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.883 qpair failed and we were unable to recover it. 
00:27:06.883 [2024-10-15 13:07:26.925394 .. 13:07:26.935531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 -- message repeated 55 times; qpair failed and we were unable to recover it. 
00:27:06.885 [2024-10-15 13:07:26.935731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.885 [2024-10-15 13:07:26.935765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.885 qpair failed and we were unable to recover it. 00:27:06.885 [2024-10-15 13:07:26.935974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.885 [2024-10-15 13:07:26.936015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.885 qpair failed and we were unable to recover it. 00:27:06.885 [2024-10-15 13:07:26.936217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.885 [2024-10-15 13:07:26.936250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.885 qpair failed and we were unable to recover it. 00:27:06.885 [2024-10-15 13:07:26.936421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.885 [2024-10-15 13:07:26.936454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.885 qpair failed and we were unable to recover it. 00:27:06.885 [2024-10-15 13:07:26.936572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.885 [2024-10-15 13:07:26.936617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.885 qpair failed and we were unable to recover it. 
00:27:06.885 [2024-10-15 13:07:26.936795 .. 13:07:26.942979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 -- message repeated 35 times; qpair failed and we were unable to recover it. 
00:27:06.886 [2024-10-15 13:07:26.943088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.886 [2024-10-15 13:07:26.943122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.886 qpair failed and we were unable to recover it. 00:27:06.886 [2024-10-15 13:07:26.943371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.886 [2024-10-15 13:07:26.943408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.886 qpair failed and we were unable to recover it. 00:27:06.886 [2024-10-15 13:07:26.943536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.886 [2024-10-15 13:07:26.943568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.943758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.943797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.944040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.944074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 
00:27:06.887 [2024-10-15 13:07:26.944252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.944284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.944420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.944453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.944575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.944636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.944759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.944794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.944906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.944940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 
00:27:06.887 [2024-10-15 13:07:26.945069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.945102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.945299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.945332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.945444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.945478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.945652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.945687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.945799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.945832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 
00:27:06.887 [2024-10-15 13:07:26.945967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.946000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.946132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.946165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.946271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.946305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.946479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.946512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.946645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.946679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 
00:27:06.887 [2024-10-15 13:07:26.946801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.946834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.947078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.947111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.947235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.947269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.947412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.947444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.947620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.947654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 
00:27:06.887 [2024-10-15 13:07:26.947765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.947798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.948001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.948033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.948165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.887 [2024-10-15 13:07:26.948198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.887 qpair failed and we were unable to recover it. 00:27:06.887 [2024-10-15 13:07:26.948331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.948370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.948544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.948577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 
00:27:06.888 [2024-10-15 13:07:26.948837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.948870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.948984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.949017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.949141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.949174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.949283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.949316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.949495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.949529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 
00:27:06.888 [2024-10-15 13:07:26.949810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.949845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.949970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.950003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.950181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.950215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.950339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.950372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.950499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.950532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 
00:27:06.888 [2024-10-15 13:07:26.950675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.950710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.950861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.950893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.951014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.951047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.951226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.951260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.951387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.951420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 
00:27:06.888 [2024-10-15 13:07:26.951621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.951655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.951833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.951866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.951992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.952025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.952161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.952193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.952457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.952490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 
00:27:06.888 [2024-10-15 13:07:26.952779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.952814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.953002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.953035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.953144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.953177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.953369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.953401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.953514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.953547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 
00:27:06.888 [2024-10-15 13:07:26.953692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.953725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.953983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.954018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.954203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.954235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.954348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.954381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 00:27:06.888 [2024-10-15 13:07:26.954624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.888 [2024-10-15 13:07:26.954659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.888 qpair failed and we were unable to recover it. 
00:27:06.888 [2024-10-15 13:07:26.954787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.954820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.954950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.954983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.955110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.955143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.955251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.955283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.955458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.955490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 
00:27:06.889 [2024-10-15 13:07:26.955618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.955651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.955771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.955804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.955993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.956025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.956130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.956162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.956281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.956317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 
00:27:06.889 [2024-10-15 13:07:26.956439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.956477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.956675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.956709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.956838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.956871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.957000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.957032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.957155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.957188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 
00:27:06.889 [2024-10-15 13:07:26.957372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.957405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.957521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.957554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.957751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.957786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.957919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.957952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.958141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.958174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 
00:27:06.889 [2024-10-15 13:07:26.958282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.958315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.958433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.958466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.958578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.958627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.958818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.958851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 00:27:06.889 [2024-10-15 13:07:26.959032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.889 [2024-10-15 13:07:26.959066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.889 qpair failed and we were unable to recover it. 
00:27:06.889 [2024-10-15 13:07:26.959204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.890 [2024-10-15 13:07:26.959237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.890 qpair failed and we were unable to recover it. 00:27:06.890 [2024-10-15 13:07:26.959454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.890 [2024-10-15 13:07:26.959487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.890 qpair failed and we were unable to recover it. 00:27:06.890 [2024-10-15 13:07:26.959657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.890 [2024-10-15 13:07:26.959692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.890 qpair failed and we were unable to recover it. 00:27:06.890 [2024-10-15 13:07:26.959809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.890 [2024-10-15 13:07:26.959842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.890 qpair failed and we were unable to recover it. 00:27:06.890 [2024-10-15 13:07:26.959956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.890 [2024-10-15 13:07:26.959990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.890 qpair failed and we were unable to recover it. 
00:27:06.890 [2024-10-15 13:07:26.960107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.890 [2024-10-15 13:07:26.960140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.890 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111, ECONNREFUSED) / "qpair failed and we were unable to recover it" error pair repeats continuously from 13:07:26.960 through 13:07:26.981, first for tqpair=0x7ff118000b90 and then for tqpair=0x7ff114000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:27:06.894 [2024-10-15 13:07:26.981218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.981252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.981357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.981389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.981520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.981554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.981748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.981781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.981963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.981997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 
00:27:06.894 [2024-10-15 13:07:26.982169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.982203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.982378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.982410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.982519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.982552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.982690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.982723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.982966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.982999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 
00:27:06.894 [2024-10-15 13:07:26.983125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.983157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.983343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.983377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.983500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.983532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.983713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.983749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.983933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.983966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 
00:27:06.894 [2024-10-15 13:07:26.984149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.984184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.984311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.984345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.984517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.984549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.984759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.984793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 00:27:06.894 [2024-10-15 13:07:26.985034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.894 [2024-10-15 13:07:26.985067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.894 qpair failed and we were unable to recover it. 
00:27:06.894 [2024-10-15 13:07:26.985184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.985216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.985397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.985431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.985551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.985583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.985697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.985729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.985901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.985939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 
00:27:06.895 [2024-10-15 13:07:26.986062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.986094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.986286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.986318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.986492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.986525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.986717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.986752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.986871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.986903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 
00:27:06.895 [2024-10-15 13:07:26.987010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.987043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.987163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.987195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.987437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.987470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.987574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.987614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.987719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.987753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 
00:27:06.895 [2024-10-15 13:07:26.987926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.987958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.988228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.988260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.988380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.988413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.988543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.988576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.988722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.988755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 
00:27:06.895 [2024-10-15 13:07:26.988968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.989000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.989263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.989295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.989398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.989430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.989562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.989596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.989731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.989764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 
00:27:06.895 [2024-10-15 13:07:26.989958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.989989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.990184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.990216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.990401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.990435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.990625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.990659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.990781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.990813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 
00:27:06.895 [2024-10-15 13:07:26.990990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.991021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.991199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.991231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.991421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.895 [2024-10-15 13:07:26.991456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.895 qpair failed and we were unable to recover it. 00:27:06.895 [2024-10-15 13:07:26.991642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.991677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.991797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.991828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 
00:27:06.896 [2024-10-15 13:07:26.992004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.992036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.992248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.992281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.992400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.992433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.992641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.992675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.992854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.992886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 
00:27:06.896 [2024-10-15 13:07:26.992995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.993027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.993129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.993160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.993404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.993437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.993543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.993575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.993756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.993796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 
00:27:06.896 [2024-10-15 13:07:26.993925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.993959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.994074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.994105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.994237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.994270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.994461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.994494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.994611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.994645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 
00:27:06.896 [2024-10-15 13:07:26.994763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.994797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.994906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.994938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.995112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.995143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.995324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.995357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.995531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.995564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 
00:27:06.896 [2024-10-15 13:07:26.995698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.995752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.995866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.995898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.996117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.996149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.996335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.996367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.996556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.996589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 
00:27:06.896 [2024-10-15 13:07:26.996728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.996759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.996938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.996970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.997080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.997113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.997374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.997406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 00:27:06.896 [2024-10-15 13:07:26.997592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.896 [2024-10-15 13:07:26.997638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.896 qpair failed and we were unable to recover it. 
00:27:06.896 [2024-10-15 13:07:26.997745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.896 [2024-10-15 13:07:26.997776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.896 qpair failed and we were unable to recover it.
00:27:06.896 [2024-10-15 13:07:26.997959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.896 [2024-10-15 13:07:26.997991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.896 qpair failed and we were unable to recover it.
00:27:06.896 [2024-10-15 13:07:26.998176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.896 [2024-10-15 13:07:26.998208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.896 qpair failed and we were unable to recover it.
00:27:06.896 [2024-10-15 13:07:26.998346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:26.998379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:26.998493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:26.998525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:26.998831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:26.998867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:26.999059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:26.999091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:26.999331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:26.999365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:26.999468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:26.999499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:26.999674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:26.999707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:26.999887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:26.999918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.000044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.000076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.000194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.000227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.000344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.000376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.000585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.000640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.000783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.000815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.001002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.001035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.001222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.001254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.001380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.001411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.001597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.001645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.001852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.001886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.002127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.002160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.002289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.002321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.002433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.002464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.002647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.002682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.002795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.002829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.002949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.002981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.003151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.003185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.003386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.003419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.003593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.003635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.003775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.003808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.003927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.003959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.004064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.004096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.004280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.004311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.004426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.004460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.004577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.897 [2024-10-15 13:07:27.004620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.897 qpair failed and we were unable to recover it.
00:27:06.897 [2024-10-15 13:07:27.004796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.004828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.005016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.005048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.005166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.005200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.005444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.005477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.005624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.005658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.005773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.005804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.005936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.005970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.006153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.006186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.006371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.006404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.006579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.006622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.006814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.006846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.006967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.007001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.007182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.007215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.007453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.007486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.007751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.007786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.007901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.007932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.008040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.008072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.008182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.008216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.008317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.008348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.008564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.008597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.008806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.008839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.009010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.009042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.009228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.009261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.009366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.009403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.009579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.009621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.009821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.009853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.010037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.010070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.010198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.010229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.010492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.010526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.010725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.010759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.010883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.010914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.011011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.011044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.011249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.898 [2024-10-15 13:07:27.011281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.898 qpair failed and we were unable to recover it.
00:27:06.898 [2024-10-15 13:07:27.011560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.011592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.011725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.011757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.011865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.011897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.012020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.012052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.012234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.012266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.012464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.012496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.012667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.012703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.012942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.012975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.013103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.013137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.013377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.013410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.013536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.013567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.013700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.013733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.013908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.013940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.014116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.014149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.014339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.014370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.014616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.014652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.014826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.014859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.014985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.015018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.015210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.015243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.015354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.015387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.015512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.015545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.015752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.015787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.016029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.016062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.016272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.016305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.016421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.016453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.016569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.899 [2024-10-15 13:07:27.016611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:06.899 qpair failed and we were unable to recover it.
00:27:06.899 [2024-10-15 13:07:27.016734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.899 [2024-10-15 13:07:27.016766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.899 qpair failed and we were unable to recover it. 00:27:06.899 [2024-10-15 13:07:27.016983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.899 [2024-10-15 13:07:27.017016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.899 qpair failed and we were unable to recover it. 00:27:06.899 [2024-10-15 13:07:27.017201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.899 [2024-10-15 13:07:27.017234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.899 qpair failed and we were unable to recover it. 00:27:06.899 [2024-10-15 13:07:27.017357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.899 [2024-10-15 13:07:27.017391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.899 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.017501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.017538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 
00:27:06.900 [2024-10-15 13:07:27.017713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.017747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.017888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.017920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.018116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.018149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.018268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.018299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.018437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.018469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 
00:27:06.900 [2024-10-15 13:07:27.018723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.018759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.018886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.018918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.019127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.019161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.019333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.019366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.019538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.019571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 
00:27:06.900 [2024-10-15 13:07:27.019696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.019731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.019837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.019870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.020138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.020170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.020418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.020451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.020644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.020679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 
00:27:06.900 [2024-10-15 13:07:27.020816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.020849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.021032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.021065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.021240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.021272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.021398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.021430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.021533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.021565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 
00:27:06.900 [2024-10-15 13:07:27.021783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.021817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.021985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.022018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.022158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.022191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.022434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.022465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.022720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.022755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 
00:27:06.900 [2024-10-15 13:07:27.023016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.023048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.023184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.023217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.023424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.023456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.023634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.023668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.023855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.023889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 
00:27:06.900 [2024-10-15 13:07:27.024015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.024048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.024182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.024214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.024423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.024457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.900 [2024-10-15 13:07:27.024644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.900 [2024-10-15 13:07:27.024679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.900 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.024861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.024894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 
00:27:06.901 [2024-10-15 13:07:27.025067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.025100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.025221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.025254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.025398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.025431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.025559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.025592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.025712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.025752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 
00:27:06.901 [2024-10-15 13:07:27.025940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.025971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.026175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.026208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.026389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.026422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.026529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.026562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.026745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.026779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 
00:27:06.901 [2024-10-15 13:07:27.027026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.027058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.027320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.027353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.027481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.027514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.027751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.027786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.028004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.028037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 
00:27:06.901 [2024-10-15 13:07:27.028211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.028243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.028503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.028535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.028731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.028766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.028964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.028997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.029185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.029217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 
00:27:06.901 [2024-10-15 13:07:27.029392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.029423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.029689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.029723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.029905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.029937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.030175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.030207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.030471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.030503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 
00:27:06.901 [2024-10-15 13:07:27.030634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.030667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.030850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.030883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.031061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.031095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.031334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.031367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.031561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.031594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 
00:27:06.901 [2024-10-15 13:07:27.031849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.031881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.032046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.032119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.032340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.901 [2024-10-15 13:07:27.032376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.901 qpair failed and we were unable to recover it. 00:27:06.901 [2024-10-15 13:07:27.032562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.902 [2024-10-15 13:07:27.032596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.902 qpair failed and we were unable to recover it. 00:27:06.902 [2024-10-15 13:07:27.032829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.902 [2024-10-15 13:07:27.032864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.902 qpair failed and we were unable to recover it. 
00:27:06.902 [2024-10-15 13:07:27.033040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.902 [2024-10-15 13:07:27.033074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.902 qpair failed and we were unable to recover it. 00:27:06.902 [2024-10-15 13:07:27.033220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.902 [2024-10-15 13:07:27.033253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.902 qpair failed and we were unable to recover it. 00:27:06.902 [2024-10-15 13:07:27.033498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.902 [2024-10-15 13:07:27.033530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.902 qpair failed and we were unable to recover it. 00:27:06.902 [2024-10-15 13:07:27.033719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.902 [2024-10-15 13:07:27.033754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.902 qpair failed and we were unable to recover it. 00:27:06.902 [2024-10-15 13:07:27.033945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.902 [2024-10-15 13:07:27.033979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.902 qpair failed and we were unable to recover it. 
00:27:06.902 [2024-10-15 13:07:27.034117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.902 [2024-10-15 13:07:27.034150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.902 qpair failed and we were unable to recover it. 00:27:06.902 [2024-10-15 13:07:27.034266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.902 [2024-10-15 13:07:27.034300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.902 qpair failed and we were unable to recover it. 00:27:06.902 [2024-10-15 13:07:27.034542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.902 [2024-10-15 13:07:27.034574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.902 qpair failed and we were unable to recover it. 00:27:06.902 [2024-10-15 13:07:27.034695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.902 [2024-10-15 13:07:27.034726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.902 qpair failed and we were unable to recover it. 00:27:06.902 [2024-10-15 13:07:27.034905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.902 [2024-10-15 13:07:27.034938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.902 qpair failed and we were unable to recover it. 
00:27:06.902 [2024-10-15 13:07:27.035144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.902 [2024-10-15 13:07:27.035177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.902 qpair failed and we were unable to recover it.
00:27:06.902 [... the identical connect() failed (errno = 111) / qpair failed sequence for tqpair=0x9a0c60 (addr=10.0.0.2, port=4420) repeats through 2024-10-15 13:07:27.060345 ...]
00:27:06.905 [2024-10-15 13:07:27.060528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.905 [2024-10-15 13:07:27.060561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.905 qpair failed and we were unable to recover it. 00:27:06.905 [2024-10-15 13:07:27.060799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.905 [2024-10-15 13:07:27.060833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.905 qpair failed and we were unable to recover it. 00:27:06.905 [2024-10-15 13:07:27.061044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.905 [2024-10-15 13:07:27.061078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.905 qpair failed and we were unable to recover it. 00:27:06.905 [2024-10-15 13:07:27.061259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.905 [2024-10-15 13:07:27.061292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.905 qpair failed and we were unable to recover it. 00:27:06.905 [2024-10-15 13:07:27.061496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.905 [2024-10-15 13:07:27.061530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.905 qpair failed and we were unable to recover it. 
00:27:06.905 [2024-10-15 13:07:27.061730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.905 [2024-10-15 13:07:27.061765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.905 qpair failed and we were unable to recover it. 00:27:06.905 [2024-10-15 13:07:27.061889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.905 [2024-10-15 13:07:27.061922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.905 qpair failed and we were unable to recover it. 00:27:06.905 [2024-10-15 13:07:27.062096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.905 [2024-10-15 13:07:27.062129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.905 qpair failed and we were unable to recover it. 00:27:06.905 [2024-10-15 13:07:27.062396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.905 [2024-10-15 13:07:27.062429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.905 qpair failed and we were unable to recover it. 00:27:06.905 [2024-10-15 13:07:27.062622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.905 [2024-10-15 13:07:27.062655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.905 qpair failed and we were unable to recover it. 
00:27:06.905 [2024-10-15 13:07:27.062766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.905 [2024-10-15 13:07:27.062799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.905 qpair failed and we were unable to recover it. 00:27:06.905 [2024-10-15 13:07:27.062971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.063004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.063181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.063214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.063421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.063454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.063696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.063730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 
00:27:06.906 [2024-10-15 13:07:27.063848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.063882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.064065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.064098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.064364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.064397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.064536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.064570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.064711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.064746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 
00:27:06.906 [2024-10-15 13:07:27.064938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.064971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.065241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.065275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.065481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.065514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.065697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.065731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.065970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.066002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 
00:27:06.906 [2024-10-15 13:07:27.066190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.066223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.066471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.066504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.066634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.066668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.066789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.066822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.067074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.067107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 
00:27:06.906 [2024-10-15 13:07:27.067354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.067386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.067638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.067672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.067909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.067942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.068117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.068149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.068269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.068307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 
00:27:06.906 [2024-10-15 13:07:27.068527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.068560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.068830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.068865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.069070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.069102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.069232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.069265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.069446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.069479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 
00:27:06.906 [2024-10-15 13:07:27.069666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.069700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.069886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.069918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.070096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.070129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.070394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.070426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.070615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.070650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 
00:27:06.906 [2024-10-15 13:07:27.070790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.070821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.071038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.071070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.071204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.071236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.906 [2024-10-15 13:07:27.071483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.906 [2024-10-15 13:07:27.071515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.906 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.071725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.071760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 
00:27:06.907 [2024-10-15 13:07:27.071885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.071918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.072127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.072159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.072333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.072365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.072479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.072513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.072687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.072721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 
00:27:06.907 [2024-10-15 13:07:27.072900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.072932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.073172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.073205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.073379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.073412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.073536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.073569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.073781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.073815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 
00:27:06.907 [2024-10-15 13:07:27.073989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.074022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.074263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.074296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.074419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.074452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.074651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.074683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.074856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.074888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 
00:27:06.907 [2024-10-15 13:07:27.075096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.075129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.075257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.075290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.075483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.075515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.075643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.075677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.075853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.075887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 
00:27:06.907 [2024-10-15 13:07:27.076104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.076137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.076247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.076276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.076700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.076738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.076950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.076985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.077162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.077194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 
00:27:06.907 [2024-10-15 13:07:27.077366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.077405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.077695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.077730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.077923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.077956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.078156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.078188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 00:27:06.907 [2024-10-15 13:07:27.078462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.907 [2024-10-15 13:07:27.078495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.907 qpair failed and we were unable to recover it. 
00:27:06.910 [2024-10-15 13:07:27.101722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.101756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.101934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.101967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.102207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.102240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.102433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.102465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.102706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.102779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 
00:27:06.910 [2024-10-15 13:07:27.102935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.102973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.103211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.103244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.103482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.103516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.103621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.103656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.103758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.103791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 
00:27:06.910 [2024-10-15 13:07:27.104074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.104108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.104290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.104323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.104565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.104598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.104861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.104895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.105010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.105042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 
00:27:06.910 [2024-10-15 13:07:27.105148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.910 [2024-10-15 13:07:27.105180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.910 qpair failed and we were unable to recover it. 00:27:06.910 [2024-10-15 13:07:27.105348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.105381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.105572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.105626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.105829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.105861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.106044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.106077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 
00:27:06.911 [2024-10-15 13:07:27.106320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.106353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.106477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.106509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.106717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.106752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.106991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.107024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.107147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.107179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 
00:27:06.911 [2024-10-15 13:07:27.107443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.107475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.107609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.107642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.107880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.107913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.108028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.108060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.108235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.108268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 
00:27:06.911 [2024-10-15 13:07:27.108376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.108406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.108622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.108655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.108771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.108803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.109060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.109092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.109282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.109316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 
00:27:06.911 [2024-10-15 13:07:27.109556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.109589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.109749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.109782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.109990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.110023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.110175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.110206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.110396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.110429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 
00:27:06.911 [2024-10-15 13:07:27.110626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.110663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.110839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.110871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.111051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.111084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.112466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.112520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.112753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.112790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 
00:27:06.911 [2024-10-15 13:07:27.113034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.113067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.113286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.113318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.113523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.113555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.113803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.113838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.114017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.114050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 
00:27:06.911 [2024-10-15 13:07:27.114257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.114290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.114474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.114506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.114692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.114729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.114940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.114972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.911 [2024-10-15 13:07:27.115162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.115195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 
00:27:06.911 [2024-10-15 13:07:27.115452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.911 [2024-10-15 13:07:27.115486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.911 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.115729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.115763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.115879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.115918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.116138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.116171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.116412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.116445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 
00:27:06.912 [2024-10-15 13:07:27.116668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.116702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.116890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.116922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.117112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.117146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.117277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.117310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.117482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.117514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 
00:27:06.912 [2024-10-15 13:07:27.117649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.117683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.117814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.117845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.117970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.118003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.118216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.118249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.118445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.118479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 
00:27:06.912 [2024-10-15 13:07:27.118653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.118689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.118886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.118918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.119115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.119149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.119387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.119420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.119609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.119644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 
00:27:06.912 [2024-10-15 13:07:27.119835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.119868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.120053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.120087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.120259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.120291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.120461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.120494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.120737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.120771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 
00:27:06.912 [2024-10-15 13:07:27.120905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.120938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.121179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.121212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.121343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.121376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.121554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.121587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1372512 Killed "${NVMF_APP[@]}" "$@" 00:27:06.912 [2024-10-15 13:07:27.121722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.121757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 
00:27:06.912 [2024-10-15 13:07:27.121870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.121903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.122085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.122118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.122231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.122264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:06.912 [2024-10-15 13:07:27.122389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.122422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 
00:27:06.912 [2024-10-15 13:07:27.122685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.122720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.122899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.122932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:06.912 [2024-10-15 13:07:27.123069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.123102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 00:27:06.912 [2024-10-15 13:07:27.123293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.912 [2024-10-15 13:07:27.123326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:06.912 qpair failed and we were unable to recover it. 
00:27:06.912 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:06.912 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.913 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1373448
00:27:06.913 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1373448
00:27:06.913 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:06.913 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1373448 ']'
00:27:06.913 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:06.913 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:06.914 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:06.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:06.914 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:06.914 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.914 [2024-10-15 13:07:27.139916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.914 [2024-10-15 13:07:27.139988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.914 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.140709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.915 [2024-10-15 13:07:27.140753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.915 qpair failed and we were unable to recover it. 00:27:06.915 [2024-10-15 13:07:27.140942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.915 [2024-10-15 13:07:27.140975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.915 qpair failed and we were unable to recover it. 00:27:06.915 [2024-10-15 13:07:27.141096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.915 [2024-10-15 13:07:27.141129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.915 qpair failed and we were unable to recover it. 00:27:06.915 [2024-10-15 13:07:27.141254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.915 [2024-10-15 13:07:27.141288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.915 qpair failed and we were unable to recover it. 00:27:06.915 [2024-10-15 13:07:27.141480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.915 [2024-10-15 13:07:27.141513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.915 qpair failed and we were unable to recover it. 
00:27:06.915 [2024-10-15 13:07:27.141695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.915 [2024-10-15 13:07:27.141731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.915 qpair failed and we were unable to recover it. 00:27:06.915 [2024-10-15 13:07:27.141871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.915 [2024-10-15 13:07:27.141906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.915 qpair failed and we were unable to recover it. 00:27:06.915 [2024-10-15 13:07:27.142101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.915 [2024-10-15 13:07:27.142144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.915 qpair failed and we were unable to recover it. 00:27:06.915 [2024-10-15 13:07:27.142275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.915 [2024-10-15 13:07:27.142310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.915 qpair failed and we were unable to recover it. 00:27:06.915 [2024-10-15 13:07:27.142491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.915 [2024-10-15 13:07:27.142523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:06.915 qpair failed and we were unable to recover it. 
00:27:06.915 [2024-10-15 13:07:27.142789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.142835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.143048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.143083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.143191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.143224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.143431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.143466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.143730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.143768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.143907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.143940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.144193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.144228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.144428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.144461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.144751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.144791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.144973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.145007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.145196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.145232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.145366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.145401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.145528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.145560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.145762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.145800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.145999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.146033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.146171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.146209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.146416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.146454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.146581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.146628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.146740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.146772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.146887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.146920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.147097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.147133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.147323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.147355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.147532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.147565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.147862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.147900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.148096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.148129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.148367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.148400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.148583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.148627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.148812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.148846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.149028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.149062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.149247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.149279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.149387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.915 [2024-10-15 13:07:27.149420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.915 qpair failed and we were unable to recover it.
00:27:06.915 [2024-10-15 13:07:27.149523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.149555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.149822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.149856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.149964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.149997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.150247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.150280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.150464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.150496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.150670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.150706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.150972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.151006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.151125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.151159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.151397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.151430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.151536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.151575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.151771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.151805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.152071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.152105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.152362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.152396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.152519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.152551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.152741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.152775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.152961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.152995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.153235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.153267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.153506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.153540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.153755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.153789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.153983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.154016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.154149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.154182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.154445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.154479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.154745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.154780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.154959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.154992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.155261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.155294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.155518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.155551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.155673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.155708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.155845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.155877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.156017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.156050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.156312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.156344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.156524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.156556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.156674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.156707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.156907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.156940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.157126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.157158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.157333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.157367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.157504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.157537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.157850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.157922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.158203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.916 [2024-10-15 13:07:27.158240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.916 qpair failed and we were unable to recover it.
00:27:06.916 [2024-10-15 13:07:27.158427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.158461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.158562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.158594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.158882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.158917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.159075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.159108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.159367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.159399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.159622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.159656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.159925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.159957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.160204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.160237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.160369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.160402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.160645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.160680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.160945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.160977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.161239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.161271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.161399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.161433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.161636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.161670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.161881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.161914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.162216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.162248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.162465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.162497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.162621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.162657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.162845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.162878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.163010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.163043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.163237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.163271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.163463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.163496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.163680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.163714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.163851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.163884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.164082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.164115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.164233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.917 [2024-10-15 13:07:27.164271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:06.917 qpair failed and we were unable to recover it.
00:27:06.917 [2024-10-15 13:07:27.164490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.164522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 00:27:06.917 [2024-10-15 13:07:27.164633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.164669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 00:27:06.917 [2024-10-15 13:07:27.164929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.164962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 00:27:06.917 [2024-10-15 13:07:27.165163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.165195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 00:27:06.917 [2024-10-15 13:07:27.165387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.165420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 
00:27:06.917 [2024-10-15 13:07:27.165593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.165638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 00:27:06.917 [2024-10-15 13:07:27.165767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.165800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 00:27:06.917 [2024-10-15 13:07:27.166094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.166127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 00:27:06.917 [2024-10-15 13:07:27.166365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.166398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 00:27:06.917 [2024-10-15 13:07:27.166611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.166645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 
00:27:06.917 [2024-10-15 13:07:27.166913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.166946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 00:27:06.917 [2024-10-15 13:07:27.167080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.167114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 00:27:06.917 [2024-10-15 13:07:27.167300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.167334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 00:27:06.917 [2024-10-15 13:07:27.167533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.917 [2024-10-15 13:07:27.167567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.917 qpair failed and we were unable to recover it. 00:27:06.917 [2024-10-15 13:07:27.167843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.918 [2024-10-15 13:07:27.167878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.918 qpair failed and we were unable to recover it. 
00:27:06.918 [2024-10-15 13:07:27.168132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.918 [2024-10-15 13:07:27.168166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.918 qpair failed and we were unable to recover it. 00:27:06.918 [2024-10-15 13:07:27.168380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.918 [2024-10-15 13:07:27.168413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.918 qpair failed and we were unable to recover it. 00:27:06.918 [2024-10-15 13:07:27.168548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.918 [2024-10-15 13:07:27.168582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.918 qpair failed and we were unable to recover it. 00:27:06.918 [2024-10-15 13:07:27.168730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.918 [2024-10-15 13:07:27.168764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:06.918 qpair failed and we were unable to recover it. 00:27:07.233 [2024-10-15 13:07:27.168902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.233 [2024-10-15 13:07:27.168935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.233 qpair failed and we were unable to recover it. 
00:27:07.233 [2024-10-15 13:07:27.169105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.233 [2024-10-15 13:07:27.169139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.233 qpair failed and we were unable to recover it. 00:27:07.233 [2024-10-15 13:07:27.169321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.233 [2024-10-15 13:07:27.169355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.233 qpair failed and we were unable to recover it. 00:27:07.233 [2024-10-15 13:07:27.169596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.233 [2024-10-15 13:07:27.169639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.233 qpair failed and we were unable to recover it. 00:27:07.233 [2024-10-15 13:07:27.169754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.233 [2024-10-15 13:07:27.169788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.233 qpair failed and we were unable to recover it. 00:27:07.233 [2024-10-15 13:07:27.169889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.233 [2024-10-15 13:07:27.169922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.233 qpair failed and we were unable to recover it. 
00:27:07.233 [2024-10-15 13:07:27.170052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.233 [2024-10-15 13:07:27.170084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.233 qpair failed and we were unable to recover it. 00:27:07.233 [2024-10-15 13:07:27.170260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.233 [2024-10-15 13:07:27.170292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.170475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.170509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.170754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.170789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.170989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.171021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 
00:27:07.234 [2024-10-15 13:07:27.171267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.171301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.171508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.171540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.171756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.171790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.171990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.172022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.172135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.172168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 
00:27:07.234 [2024-10-15 13:07:27.172357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.172390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.172580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.172622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.172799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.172832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.173001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.173033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.173287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.173320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 
00:27:07.234 [2024-10-15 13:07:27.173494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.173532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.173714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.173749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.173929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.173962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.174139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.174172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.174370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.174402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 
00:27:07.234 [2024-10-15 13:07:27.174591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.174635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.174877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.174910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.175094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.175126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.175252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.175285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.175440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.175472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 
00:27:07.234 [2024-10-15 13:07:27.175645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.175679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.175922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.175955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.176129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.176162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.176375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.176408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.176614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.176648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 
00:27:07.234 [2024-10-15 13:07:27.176846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.176880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.177093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.177112] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:27:07.234 [2024-10-15 13:07:27.177126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.177152] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.234 [2024-10-15 13:07:27.177278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.177309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.177499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.177530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 
00:27:07.234 [2024-10-15 13:07:27.177797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.177828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.178017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.178049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.178186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.178218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.178473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.178505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.178699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.178733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 
00:27:07.234 [2024-10-15 13:07:27.178837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.178870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.179116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.234 [2024-10-15 13:07:27.179149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.234 qpair failed and we were unable to recover it. 00:27:07.234 [2024-10-15 13:07:27.179385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.179434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.179729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.179767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.180020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.180055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 
00:27:07.235 [2024-10-15 13:07:27.180274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.180307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.180479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.180511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.180633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.180669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.180781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.180812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.181000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.181035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 
00:27:07.235 [2024-10-15 13:07:27.181248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.181282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.181417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.181450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.181651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.181687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.181941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.181976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.182145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.182177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 
00:27:07.235 [2024-10-15 13:07:27.182305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.182338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.182453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.182486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.182767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.182811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.182930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.182962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.183108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.183140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 
00:27:07.235 [2024-10-15 13:07:27.183252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.183285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.183425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.183456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.183576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.183632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.183818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.183852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 00:27:07.235 [2024-10-15 13:07:27.183969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.235 [2024-10-15 13:07:27.184002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.235 qpair failed and we were unable to recover it. 
00:27:07.238 [2024-10-15 13:07:27.208233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.208268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.208522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.208558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.208759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.208792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.208964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.208996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.209214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.209246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 
00:27:07.238 [2024-10-15 13:07:27.209509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.209541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.209678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.209716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.209843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.209876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.210070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.210105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.210277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.210311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 
00:27:07.238 [2024-10-15 13:07:27.210420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.210452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.210705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.210739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.210940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.210973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.211101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.211136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.211396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.211434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 
00:27:07.238 [2024-10-15 13:07:27.211642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.211677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.211868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.211903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.212078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.212111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.212287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.212320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.212509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.212542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 
00:27:07.238 [2024-10-15 13:07:27.212793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-10-15 13:07:27.212831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.238 qpair failed and we were unable to recover it. 00:27:07.238 [2024-10-15 13:07:27.213074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.213107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.213353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.213388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.213630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.213667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.213860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.213897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 
00:27:07.239 [2024-10-15 13:07:27.214163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.214195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.214387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.214420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.214635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.214677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.214863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.214894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.215147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.215181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 
00:27:07.239 [2024-10-15 13:07:27.215396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.215431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.215624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.215660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.215848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.215881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.216055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.216088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.216314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.216352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 
00:27:07.239 [2024-10-15 13:07:27.216546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.216577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.216775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.216810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.216991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.217024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.217264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.217297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.217543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.217581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 
00:27:07.239 [2024-10-15 13:07:27.217914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.217947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.218060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.218092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.218284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.218318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.218424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.218457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.218643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.218679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 
00:27:07.239 [2024-10-15 13:07:27.218879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.218912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.219133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.219166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.219408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.219441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.219635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.219670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.219910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.219943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 
00:27:07.239 [2024-10-15 13:07:27.220069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.220100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.220301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.220332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.220451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.220485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.220680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.220714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.220983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.221016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 
00:27:07.239 [2024-10-15 13:07:27.221138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.221170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.221380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.221413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.221523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.221559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.221843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.221879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.222076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.222110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 
00:27:07.239 [2024-10-15 13:07:27.222298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.222331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.239 qpair failed and we were unable to recover it. 00:27:07.239 [2024-10-15 13:07:27.222447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-10-15 13:07:27.222480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.222739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.222775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.222967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.223000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.223202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.223234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 
00:27:07.240 [2024-10-15 13:07:27.223405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.223437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.223619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.223652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.223783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.223823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.224068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.224101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.224228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.224260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 
00:27:07.240 [2024-10-15 13:07:27.224444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.224477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.224739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.224772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.224950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.224981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.225162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.225194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.225298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.225331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 
00:27:07.240 [2024-10-15 13:07:27.225500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.225532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.225652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.225685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.225952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.225986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.226109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.226141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.226334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.226367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 
00:27:07.240 [2024-10-15 13:07:27.226551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.226583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.226842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.226876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.227063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.227095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.227279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.227312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.227526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.227558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 
00:27:07.240 [2024-10-15 13:07:27.227796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.227831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.228007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.228038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.228228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.228260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.228386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.228419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.228612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.228646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 
00:27:07.240 [2024-10-15 13:07:27.228887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.228920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.229107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.229140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.229379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.229412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.229590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.229635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.229826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.229857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 
00:27:07.240 [2024-10-15 13:07:27.229991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.230023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.230139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.230173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.230384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.230416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.230586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.230628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.230807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.230839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 
00:27:07.240 [2024-10-15 13:07:27.231030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.231063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.240 [2024-10-15 13:07:27.231245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.240 [2024-10-15 13:07:27.231279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.240 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.231502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.231534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.231775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.231810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.231991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.232025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 
00:27:07.241 [2024-10-15 13:07:27.232216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.232250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.232460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.232493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.232730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.232770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.232951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.232982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.233151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.233182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 
00:27:07.241 [2024-10-15 13:07:27.233445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.233477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.233599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.233640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.233762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.233796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.233916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.233947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.234144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.234176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 
00:27:07.241 [2024-10-15 13:07:27.234298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.234330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.234452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.234484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.234655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.234695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.234903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.234936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.235128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.235161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 
00:27:07.241 [2024-10-15 13:07:27.235373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.235406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.235643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.235678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.235880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.235914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.236177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.236217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.236397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.236430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 
00:27:07.241 [2024-10-15 13:07:27.236645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.236680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.236849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.236883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.237060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.237092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.237331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.237365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.237539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.237573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 
00:27:07.241 [2024-10-15 13:07:27.237711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.237744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.238006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.238039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.238209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.238241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.238453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.238485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.238611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.238644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 
00:27:07.241 [2024-10-15 13:07:27.238749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.238781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.238982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.239015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.239188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.239221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.239457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.239490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.239732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.239767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 
00:27:07.241 [2024-10-15 13:07:27.239966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.239999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.240276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.241 [2024-10-15 13:07:27.240310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.241 qpair failed and we were unable to recover it. 00:27:07.241 [2024-10-15 13:07:27.240494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.240527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.240643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.240677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.240868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.240901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 
00:27:07.242 [2024-10-15 13:07:27.241083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.241116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.241301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.241333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.241533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.241580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.241884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.241934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.242151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.242186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 
00:27:07.242 [2024-10-15 13:07:27.242313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.242348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.242563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.242596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.242790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.242824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.242995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.243028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.243278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.243312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 
00:27:07.242 [2024-10-15 13:07:27.243425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.243458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.243593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.243640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.243869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.243902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.244075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.244108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.244283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.244316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 
00:27:07.242 [2024-10-15 13:07:27.244599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.244647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.244898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.244935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.245056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.245089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.245218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.245251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.245383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.245416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 
00:27:07.242 [2024-10-15 13:07:27.245614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.245648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.245854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.245887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.246086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.246119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.246300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.246332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 00:27:07.242 [2024-10-15 13:07:27.246588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.242 [2024-10-15 13:07:27.246632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.242 qpair failed and we were unable to recover it. 
00:27:07.242 [2024-10-15 13:07:27.246876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.242 [2024-10-15 13:07:27.246909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.242 qpair failed and we were unable to recover it.
00:27:07.242 [2024-10-15 13:07:27.247032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.242 [2024-10-15 13:07:27.247065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.242 qpair failed and we were unable to recover it.
00:27:07.242 [2024-10-15 13:07:27.247320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.242 [2024-10-15 13:07:27.247354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.242 qpair failed and we were unable to recover it.
00:27:07.242 [2024-10-15 13:07:27.247544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.242 [2024-10-15 13:07:27.247577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.242 qpair failed and we were unable to recover it.
00:27:07.242 [2024-10-15 13:07:27.247847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.242 [2024-10-15 13:07:27.247918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.242 qpair failed and we were unable to recover it.
00:27:07.242 [2024-10-15 13:07:27.248121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.242 [2024-10-15 13:07:27.248166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.242 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.248412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.248448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.248580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.248626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.248757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.248798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.249031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.249062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.249247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.249279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.249528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.249566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.249781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.249820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.250122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.250157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.250260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:07.243 [2024-10-15 13:07:27.250371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.250408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.250593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.250656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.250846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.250880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.251022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.251063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.251206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.251240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.251380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.251412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.251650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.251685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.251883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.251916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.252050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.252083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.252344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.252377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.252665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.252700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.252883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.252917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.253220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.253253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.253503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.253535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.253727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.253763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.253953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.253986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.254274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.254318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.254610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.254645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.254841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.254875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.255119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.255152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.255359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.255393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.255581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.255623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.255804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.255838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.256078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.256112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.256315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.256349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.256621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.256656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.256793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.256827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.257009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.257041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.257300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.257334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.257471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.257505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.257770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.257807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.243 [2024-10-15 13:07:27.257974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.243 [2024-10-15 13:07:27.258008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.243 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.258278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.258313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.258498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.258532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.258714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.258749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.259013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.259047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.259243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.259276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.259564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.259609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.259896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.259931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.260145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.260180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.260365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.260400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.260583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.260626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.260770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.260804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.261012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.261070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.261347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.261388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.261518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.261552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.261684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.261720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.261976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.262010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.262215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.262248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.262430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.262464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.262597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.262647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.262830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.262864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.263187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.263220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.263435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.263469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.263591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.263635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.263843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.263876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.264138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.264177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.264484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.264518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.264715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.264750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.264965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.264998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.265131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.265174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.265422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.265456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.265704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.265739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.265928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.265962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.266210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.266243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.266460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.266493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.266693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.266729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.266998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.267031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.267149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.267182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.267363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.267395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.267608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.267643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.267917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.244 [2024-10-15 13:07:27.267951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.244 qpair failed and we were unable to recover it.
00:27:07.244 [2024-10-15 13:07:27.268124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.268157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.268342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.268376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.268643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.268678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.268821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.268853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.269050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.269084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.269351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.269384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.269678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.269712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.269968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.270000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.270182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.270215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.270375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.270409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.270671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.270705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.270987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.271030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.271305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.271350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.271596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.271639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.271810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.271843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.272089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.272123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.272311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.272344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.272584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.272624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.272866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.272899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.273083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.273117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.273355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.273389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.273645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.273680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.273918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.273951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.274084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.274117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.274375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.274408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.274615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.274650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.274896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.274930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.275134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.275167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.275434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.245 [2024-10-15 13:07:27.275469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.245 qpair failed and we were unable to recover it.
00:27:07.245 [2024-10-15 13:07:27.275756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.245 [2024-10-15 13:07:27.275792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.245 qpair failed and we were unable to recover it. 00:27:07.245 [2024-10-15 13:07:27.276062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.245 [2024-10-15 13:07:27.276094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.245 qpair failed and we were unable to recover it. 00:27:07.245 [2024-10-15 13:07:27.276364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.245 [2024-10-15 13:07:27.276396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.245 qpair failed and we were unable to recover it. 00:27:07.245 [2024-10-15 13:07:27.276532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.245 [2024-10-15 13:07:27.276565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.245 qpair failed and we were unable to recover it. 00:27:07.245 [2024-10-15 13:07:27.276764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.245 [2024-10-15 13:07:27.276797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.245 qpair failed and we were unable to recover it. 
00:27:07.245 [2024-10-15 13:07:27.276988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.245 [2024-10-15 13:07:27.277021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.245 qpair failed and we were unable to recover it. 00:27:07.245 [2024-10-15 13:07:27.277230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.245 [2024-10-15 13:07:27.277264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.245 qpair failed and we were unable to recover it. 00:27:07.245 [2024-10-15 13:07:27.277502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.245 [2024-10-15 13:07:27.277534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.245 qpair failed and we were unable to recover it. 00:27:07.245 [2024-10-15 13:07:27.277718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.245 [2024-10-15 13:07:27.277754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.245 qpair failed and we were unable to recover it. 00:27:07.245 [2024-10-15 13:07:27.278001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.245 [2024-10-15 13:07:27.278037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.245 qpair failed and we were unable to recover it. 
00:27:07.245 [2024-10-15 13:07:27.278276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.245 [2024-10-15 13:07:27.278309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.245 qpair failed and we were unable to recover it. 00:27:07.245 [2024-10-15 13:07:27.278496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.278529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.278767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.278802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.278924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.278957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.279173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.279206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 
00:27:07.246 [2024-10-15 13:07:27.279470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.279505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.279697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.279732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.279946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.279980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.280243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.280276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.280465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.280498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 
00:27:07.246 [2024-10-15 13:07:27.280783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.280823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.281094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.281130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.281399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.281438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.281710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.281743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.282020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.282053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 
00:27:07.246 [2024-10-15 13:07:27.282338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.282372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.282643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.282679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.282962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.282996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.283177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.283211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.283451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.283483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 
00:27:07.246 [2024-10-15 13:07:27.283721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.283755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.284051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.284086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.284373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.284407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.284674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.284709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.284831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.284863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 
00:27:07.246 [2024-10-15 13:07:27.285128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.285161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.285407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.285441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.285623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.285656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.285896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.285930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.286105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.286138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 
00:27:07.246 [2024-10-15 13:07:27.286422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.286455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.286646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.286680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.286804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.286837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.287014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.287047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.287286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.287319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 
00:27:07.246 [2024-10-15 13:07:27.287621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.287657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.287796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.246 [2024-10-15 13:07:27.287829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.246 qpair failed and we were unable to recover it. 00:27:07.246 [2024-10-15 13:07:27.288044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.288079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.288189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.288222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.288491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.288525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 
00:27:07.247 [2024-10-15 13:07:27.288721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.288755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.289010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.289044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.289330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.289364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.289632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.289667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.289804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.289838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 
00:27:07.247 [2024-10-15 13:07:27.290099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.290133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.290424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.290457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.290726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.290762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.291029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.291061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.291254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.291288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 
00:27:07.247 [2024-10-15 13:07:27.291541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.291576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.291835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.291870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.292067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.292107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.292292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.292325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.292566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.292599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.292695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:07.247 [2024-10-15 13:07:27.292721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.247 [2024-10-15 13:07:27.292729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.247 [2024-10-15 13:07:27.292736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.247 [2024-10-15 13:07:27.292741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.247 [2024-10-15 13:07:27.292900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.292932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.293123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.293154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.293444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.293477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.293652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.293687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 
00:27:07.247 [2024-10-15 13:07:27.293861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.293894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.294175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.294208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.294325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:07.247 [2024-10-15 13:07:27.294445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.294479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 00:27:07.247 [2024-10-15 13:07:27.294414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:07.247 [2024-10-15 13:07:27.294524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:07.247 [2024-10-15 13:07:27.294630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.294525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:07.247 [2024-10-15 13:07:27.294664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it. 
00:27:07.247 [2024-10-15 13:07:27.294867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.247 [2024-10-15 13:07:27.294899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.247 qpair failed and we were unable to recover it.
00:27:07.247 [last message group repeated 25 more times for tqpair=0x7ff120000b90, timestamps 13:07:27.295179 through 13:07:27.301116]
00:27:07.248 [2024-10-15 13:07:27.301351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.248 [2024-10-15 13:07:27.301394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.248 qpair failed and we were unable to recover it.
00:27:07.249 [last message group repeated 39 more times for tqpair=0x7ff118000b90, timestamps 13:07:27.301575 through 13:07:27.311427]
00:27:07.249 [2024-10-15 13:07:27.311670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.249 [2024-10-15 13:07:27.311726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.249 qpair failed and we were unable to recover it.
00:27:07.250 [last message group repeated 48 more times for tqpair=0x9a0c60, timestamps 13:07:27.311988 through 13:07:27.324777]
00:27:07.250 [2024-10-15 13:07:27.324992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.325027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.325204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.325239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.325444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.325478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.325743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.325779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.326019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.326053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 
00:27:07.250 [2024-10-15 13:07:27.326246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.326287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.326502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.326535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.326715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.326750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.327020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.327054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.327245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.327278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 
00:27:07.250 [2024-10-15 13:07:27.327533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.327568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.327809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.327861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.328118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.328152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.328434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.328468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.328591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.328636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 
00:27:07.250 [2024-10-15 13:07:27.328900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.328934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.329124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.329158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.329347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.329380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.329664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.329700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 00:27:07.250 [2024-10-15 13:07:27.329970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.250 [2024-10-15 13:07:27.330003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.250 qpair failed and we were unable to recover it. 
00:27:07.250 [2024-10-15 13:07:27.330245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.330278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.330545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.330579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.330832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.330866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.331079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.331112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.331376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.331410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 
00:27:07.251 [2024-10-15 13:07:27.331593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.331644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.331889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.331923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.332134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.332167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.332354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.332388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.332647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.332681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 
00:27:07.251 [2024-10-15 13:07:27.332870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.332904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.333074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.333107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.333377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.333416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.333690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.333723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.333999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.334032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 
00:27:07.251 [2024-10-15 13:07:27.334304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.334337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.334622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.334657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.334899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.334933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.335140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.335172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.335439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.335472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 
00:27:07.251 [2024-10-15 13:07:27.335731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.335770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.335956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.335999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.336189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.336223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.336402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.336436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.336721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.336758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 
00:27:07.251 [2024-10-15 13:07:27.337029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.337065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.337283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.337318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.337541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.337577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.337834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.337872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.338162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.338198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 
00:27:07.251 [2024-10-15 13:07:27.338485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.338520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.338654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.338691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.338953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.338989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.339261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.339295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.339436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.339471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 
00:27:07.251 [2024-10-15 13:07:27.339662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.339698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.339881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.339914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.340119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.340152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.340413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.340446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.340694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.340728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 
00:27:07.251 [2024-10-15 13:07:27.340983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.341016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.341199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.341232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.341417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.341449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.341686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.251 [2024-10-15 13:07:27.341721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.251 qpair failed and we were unable to recover it. 00:27:07.251 [2024-10-15 13:07:27.341953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.341987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 
00:27:07.252 [2024-10-15 13:07:27.342231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.342264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.342557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.342589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.342811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.342846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.343084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.343117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.343295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.343327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 
00:27:07.252 [2024-10-15 13:07:27.343621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.343657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.343770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.343800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.344036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.344075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.344283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.344316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.344596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.344640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 
00:27:07.252 [2024-10-15 13:07:27.344851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.344884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.345148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.345182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.345421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.345454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.345722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.345757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.346028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.346062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 
00:27:07.252 [2024-10-15 13:07:27.346254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.346288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.346538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.346572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff118000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.346794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.346858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.347145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.347200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.347471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.347504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 
00:27:07.252 [2024-10-15 13:07:27.347773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.347808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.347945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.347978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.348171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.348204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.348443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.348476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.348743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.348777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 
00:27:07.252 [2024-10-15 13:07:27.349061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.349094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.349364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.349398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.349684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.349719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.349934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.349967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.350147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.350180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 
00:27:07.252 [2024-10-15 13:07:27.350418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.350451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.350694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.350729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.350942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.350975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.351166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.351199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.351477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.351517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 
00:27:07.252 [2024-10-15 13:07:27.351631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.351664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.351972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.352006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.352212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.352246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.352515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.352548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.352834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.352868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 
00:27:07.252 [2024-10-15 13:07:27.353059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.353093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.353358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.353391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.353673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.252 [2024-10-15 13:07:27.353708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.252 qpair failed and we were unable to recover it. 00:27:07.252 [2024-10-15 13:07:27.353899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.353933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.354154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.354186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 
00:27:07.253 [2024-10-15 13:07:27.354379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.354412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.354617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.354652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.354891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.354924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.355218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.355252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.355463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.355496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 
00:27:07.253 [2024-10-15 13:07:27.355701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.355736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.355998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.356032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.356182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.356215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.356503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.356537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.356736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.356771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 
00:27:07.253 [2024-10-15 13:07:27.356987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.357020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.357210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.357243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.357451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.357485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.357723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.357757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.357958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.357990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 
00:27:07.253 [2024-10-15 13:07:27.358260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.358294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.358511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.358544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.358677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.358712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.358884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.358917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.359188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.359221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 
00:27:07.253 [2024-10-15 13:07:27.359410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.359444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.359703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.359737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.360024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.360057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.360188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.360221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.360484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.360516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 
00:27:07.253 [2024-10-15 13:07:27.360727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.360761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.361048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.361082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.361345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.361378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.361588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.361632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.361901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.361935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0c60 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 
00:27:07.253 [2024-10-15 13:07:27.362170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.362214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.362475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.362510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.362793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.362828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.363040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.363073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.363323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.363357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 
00:27:07.253 [2024-10-15 13:07:27.363557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.363590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.363724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.363758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.364033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.364067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.253 [2024-10-15 13:07:27.364263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.253 [2024-10-15 13:07:27.364296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.253 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.364554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.364586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 
00:27:07.254 [2024-10-15 13:07:27.364783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.364817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.365079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.365113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.365357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.365391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.365527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.365569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.365793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.365826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 
00:27:07.254 [2024-10-15 13:07:27.366010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.366044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.366262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.366296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.366534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.366567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.366856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.366891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.367129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.367162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 
00:27:07.254 [2024-10-15 13:07:27.367452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.367485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.367754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.367790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.368054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.368087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.368302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.368335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.368531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.368565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 
00:27:07.254 [2024-10-15 13:07:27.368814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.368848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.369086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.369119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.369369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.369402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.369690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.369724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.369932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.369965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 
00:27:07.254 [2024-10-15 13:07:27.370233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.370267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.370472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.370505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.370753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.370787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.371071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.371105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.371376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.371409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 
00:27:07.254 [2024-10-15 13:07:27.371624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.371659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.371908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.371941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.372158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.372192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.372441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.372475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 00:27:07.254 [2024-10-15 13:07:27.372662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.372696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. 
00:27:07.254 [2024-10-15 13:07:27.372966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.254 [2024-10-15 13:07:27.373019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:07.254 qpair failed and we were unable to recover it. [the same connect()/qpair error triplet, now for tqpair=0x7ff114000b90, repeats 64 more times between 13:07:27.373309 and 13:07:27.388944]
00:27:07.256 [2024-10-15 13:07:27.389181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-10-15 13:07:27.389215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff114000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. [the same connect()/qpair error triplet for tqpair=0x7ff114000b90 repeats 12 more times between 13:07:27.389395 and 13:07:27.392243, interleaved with the following shell trace lines] 00:27:07.256 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.256 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:07.256 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:07.256 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:07.256 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:07.256 [2024-10-15 13:07:27.392597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-10-15 13:07:27.392647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. [the same connect()/qpair error triplet, back on tqpair=0x7ff120000b90, repeats 24 more times between 13:07:27.392920 and 13:07:27.398670]
00:27:07.257 [2024-10-15 13:07:27.398858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.398892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.399084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.399117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.399382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.399416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.399716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.399750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.399933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.399966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 
00:27:07.257 [2024-10-15 13:07:27.400098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.400130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.400375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.400409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.400651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.400685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.400941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.400973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.401253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.401287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 
00:27:07.257 [2024-10-15 13:07:27.401594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.401642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.401831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.401865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.402058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.402091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.402374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.402408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.402551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.402584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 
00:27:07.257 [2024-10-15 13:07:27.402856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.402891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.403079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.403113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.403301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.403334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.403527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.403560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.403713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.403748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 
00:27:07.257 [2024-10-15 13:07:27.403999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.404032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.404224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.404258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.404442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.404474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.404594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.404638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.404919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.404952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 
00:27:07.257 [2024-10-15 13:07:27.405244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.405277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.405483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.405516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.405719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.405753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.405951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.405990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.406166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.406198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 
00:27:07.257 [2024-10-15 13:07:27.406414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.406447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.406730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.406764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.407032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.407064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.407205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.407237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.407483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.407517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 
00:27:07.257 [2024-10-15 13:07:27.407770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-10-15 13:07:27.407804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-10-15 13:07:27.407993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.408026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.408266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.408298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.408414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.408447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.408719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.408755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 
00:27:07.258 [2024-10-15 13:07:27.408949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.408981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.409101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.409133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.409377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.409411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.409676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.409710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.409890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.409923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 
00:27:07.258 [2024-10-15 13:07:27.410063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.410098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.410302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.410334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.410609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.410643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.410852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.410888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.411079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.411111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 
00:27:07.258 [2024-10-15 13:07:27.411306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.411339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.411549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.411583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.411806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.411838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.411991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.412023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.412318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.412354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 
00:27:07.258 [2024-10-15 13:07:27.412472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.412511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.412638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.412671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.412862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.412895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.413022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.413054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.413332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.413365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 
00:27:07.258 [2024-10-15 13:07:27.413649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.413683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.413872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.413905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.414102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.414134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.414432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.414465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.414615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.414648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 
00:27:07.258 [2024-10-15 13:07:27.414826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.414858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.414986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.415019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.415190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.415222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.415421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.415454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.415657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.415693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 
00:27:07.258 [2024-10-15 13:07:27.415892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.415926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.416077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.416110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.416394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.416427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.416635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.416668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 00:27:07.258 [2024-10-15 13:07:27.416914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.258 [2024-10-15 13:07:27.416949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.258 qpair failed and we were unable to recover it. 
00:27:07.259 [2024-10-15 13:07:27.417151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.259 [2024-10-15 13:07:27.417185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.259 qpair failed and we were unable to recover it. 00:27:07.259 [2024-10-15 13:07:27.417387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.259 [2024-10-15 13:07:27.417422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.259 qpair failed and we were unable to recover it. 00:27:07.259 [2024-10-15 13:07:27.417695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.259 [2024-10-15 13:07:27.417729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.259 qpair failed and we were unable to recover it. 00:27:07.259 [2024-10-15 13:07:27.417973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.259 [2024-10-15 13:07:27.418006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.259 qpair failed and we were unable to recover it. 00:27:07.259 [2024-10-15 13:07:27.418267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.259 [2024-10-15 13:07:27.418299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.259 qpair failed and we were unable to recover it. 
00:27:07.259 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:07.260 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
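The traced `trap` line registers the suite's cleanup handler before the test body runs, so diagnostics and teardown happen on interrupt, termination, or normal exit. A minimal sketch of the same pattern with hypothetical stand-in functions (`process_shm` and `nvmftestfini` are the suite's own helpers and are not reproduced here):

```shell
#!/usr/bin/env bash
# Hypothetical stand-ins for the suite's process_shm / nvmftestfini helpers.
collect_diagnostics() { echo "dumping shared-memory state"; }
testfini() { echo "tearing down the NVMe-oF target"; }

# '|| :' keeps a failing diagnostic step from aborting the rest of the
# cleanup chain; the trap fires on Ctrl-C, SIGTERM, and normal exit alike.
trap 'collect_diagnostics || :; testfini' SIGINT SIGTERM EXIT

echo "test body runs here"
```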
00:27:07.260 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.260 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:07.261 [2024-10-15 13:07:27.443550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.443583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.443836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.443868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.444042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.444074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.444268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.444301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.444506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.444539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 
00:27:07.261 [2024-10-15 13:07:27.444733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.444766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.445003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.445034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.445251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.445283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.445463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.445496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.445734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.445767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 
00:27:07.261 [2024-10-15 13:07:27.445958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.445991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.446213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.446244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.446439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.446473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.446646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.446680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.446923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.446955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 
00:27:07.261 [2024-10-15 13:07:27.447149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.447181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.447419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.447452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.447630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.447663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.447803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.447836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.448023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.448055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 
00:27:07.261 [2024-10-15 13:07:27.448271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.448304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.448494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.448525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.448715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.448749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.448919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.448951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.449196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.449230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 
00:27:07.261 [2024-10-15 13:07:27.449417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.449460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.449662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.449696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.449833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.449865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.450106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.450139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.450320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.450351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 
00:27:07.261 [2024-10-15 13:07:27.450591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.450632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.450824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.450857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-10-15 13:07:27.451035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-10-15 13:07:27.451067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.451261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.451294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.451558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.451590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-10-15 13:07:27.451868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.451901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.452038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.452071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.452289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.452321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.452511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.452543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.452708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.452740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-10-15 13:07:27.453008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.453041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.453221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.453253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.453510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.453542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.453702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.453736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.454018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.454051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-10-15 13:07:27.454239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.454271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.454462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.454493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.454677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.454711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.454889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.454921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.455159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.455190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-10-15 13:07:27.455450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.455482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.455739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.455773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.456023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.456056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.456244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.456277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.456461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.456494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-10-15 13:07:27.456684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.456717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.456990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.457023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.457215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.457247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.457433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.457465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.457653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.457687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-10-15 13:07:27.457882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.457916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.458155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.458189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.458451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.458484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.458695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.458730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.458849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.458881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-10-15 13:07:27.459073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.459113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.459394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.459427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.459695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.459728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.460015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.460049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.460239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.460275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-10-15 13:07:27.460530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.460563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.460783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.460818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.461058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.461093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.461407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.461441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-10-15 13:07:27.461629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-10-15 13:07:27.461666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-10-15 13:07:27.461858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.262 [2024-10-15 13:07:27.461893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.262 qpair failed and we were unable to recover it.
00:27:07.262 [2024-10-15 13:07:27.462082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.262 [2024-10-15 13:07:27.462115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.262 qpair failed and we were unable to recover it.
00:27:07.262 [2024-10-15 13:07:27.462387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.262 [2024-10-15 13:07:27.462421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.262 qpair failed and we were unable to recover it.
00:27:07.262 [2024-10-15 13:07:27.462622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.262 [2024-10-15 13:07:27.462657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.262 qpair failed and we were unable to recover it.
00:27:07.262 [2024-10-15 13:07:27.462888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.262 [2024-10-15 13:07:27.462920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.262 qpair failed and we were unable to recover it.
00:27:07.262 [2024-10-15 13:07:27.463162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.463195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.463485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.463517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.463777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.463813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.464108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.464141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.464345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.464378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.464646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.464679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.464801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.464832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.465027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.465060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.465325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.465358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.465529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.465561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.465809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.465843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.466049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.466081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 Malloc0
00:27:07.263 [2024-10-15 13:07:27.466292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.466326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.466563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.466595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.466787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.466820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.263 [2024-10-15 13:07:27.466999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.467031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.467288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.467322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.467621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.467656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.263 [2024-10-15 13:07:27.467847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.467879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:07.263 [2024-10-15 13:07:27.468097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.468129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.468343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.468377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.468664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.468697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.468914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.468946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.469210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.469242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.469436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.469469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.469662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.469695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.469972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.470005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.470274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.470306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.470512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.470545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.470781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.470814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.471074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.471106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.471404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.471437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.471640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.471674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.471922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.471955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.472221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.472253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.472539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.472571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.472771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.472805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.473076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.473107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.473291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.473323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.473590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.473633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.473854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:07.263 [2024-10-15 13:07:27.473909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.473942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.474184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.474216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.474481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.474513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.474752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.474786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.474914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.474948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.475156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.475188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.475426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.475458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.475670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.475704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.475888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.475921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.476183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.476215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.476484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.476516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.476792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.476825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.477051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.477085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.263 [2024-10-15 13:07:27.477199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-10-15 13:07:27.477231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.477344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.477376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.477638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.477672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.477944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.477976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.478218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.478250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.478521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.478553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.478817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.478849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.479137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.479170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.479441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.479473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.479762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.479797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.479979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.480017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.480234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.480267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.480505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.480537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.480803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.480836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.481077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.481110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.481319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.481352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.481620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.481653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.481922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.481954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.482238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.482270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.264 [2024-10-15 13:07:27.482543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.482577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.482839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.482872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:07.264 [2024-10-15 13:07:27.483147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.483179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.264 [2024-10-15 13:07:27.483463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.483496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:07.264 [2024-10-15 13:07:27.483632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.483667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.483950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.483982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.484248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.484280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.484472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.484505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.484691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.484725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.484836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.484868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.485052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.485084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.485340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.485372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.485639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.485674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.485918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.485950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.486190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.486222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.486502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.486534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.486815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.486850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.487115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.487147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.487349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.487381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.487640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.487674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.487963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.487996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.488255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.488287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.488522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.488554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.488862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.488896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.489146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.489179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.489441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.489473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.489713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.489746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.489985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.490017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.490273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.490307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.264 [2024-10-15 13:07:27.490598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.490640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.490900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.490932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.491148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.491180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.264 [2024-10-15 13:07:27.491392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.491425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:07.264 [2024-10-15 13:07:27.491665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.264 [2024-10-15 13:07:27.491699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420
00:27:07.264 qpair failed and we were unable to recover it.
00:27:07.264 [2024-10-15 13:07:27.491934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.491967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.492157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.492190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.492458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.492492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.492731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.492764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.492899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.492931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-10-15 13:07:27.493171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.493204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.493396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.493428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.493672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.493707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.493973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.494005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.494232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.494263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-10-15 13:07:27.494552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.494585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.494864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.494897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.495174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.495206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.495396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.495428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.495693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.495728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-10-15 13:07:27.495931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.495963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.496211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.496243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.496381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.496413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.496588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.496646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.496911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.496944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-10-15 13:07:27.497162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.497199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.497490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.497521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.497789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.497824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.497962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.497994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.498166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.498198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-10-15 13:07:27.498335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.498367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.498543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.265 [2024-10-15 13:07:27.498576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-10-15 13:07:27.498762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.498795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.265 [2024-10-15 13:07:27.499039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-10-15 13:07:27.499072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.526 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.526 [2024-10-15 13:07:27.499382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.526 [2024-10-15 13:07:27.499415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.526 qpair failed and we were unable to recover it. 00:27:07.526 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:07.526 [2024-10-15 13:07:27.499620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.526 [2024-10-15 13:07:27.499655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.527 qpair failed and we were unable to recover it. 00:27:07.527 [2024-10-15 13:07:27.499912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.527 [2024-10-15 13:07:27.499944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.527 qpair failed and we were unable to recover it. 00:27:07.527 [2024-10-15 13:07:27.500217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.527 [2024-10-15 13:07:27.500249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 [2024-10-15 13:07:27.500535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.527 [2024-10-15 13:07:27.500567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.527 qpair failed and we were unable to recover it. 00:27:07.527 [2024-10-15 13:07:27.500842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.527 [2024-10-15 13:07:27.500877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.527 qpair failed and we were unable to recover it. 00:27:07.527 [2024-10-15 13:07:27.501130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.527 [2024-10-15 13:07:27.501161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.527 qpair failed and we were unable to recover it. 00:27:07.527 [2024-10-15 13:07:27.501389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.527 [2024-10-15 13:07:27.501421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.527 qpair failed and we were unable to recover it. 00:27:07.527 [2024-10-15 13:07:27.501662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.527 [2024-10-15 13:07:27.501696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 [2024-10-15 13:07:27.501840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.527 [2024-10-15 13:07:27.501872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff120000b90 with addr=10.0.0.2, port=4420 00:27:07.527 qpair failed and we were unable to recover it. 00:27:07.527 [2024-10-15 13:07:27.502053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.527 [2024-10-15 13:07:27.504501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.527 [2024-10-15 13:07:27.504655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.527 [2024-10-15 13:07:27.504702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.527 [2024-10-15 13:07:27.504725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.527 [2024-10-15 13:07:27.504745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.527 [2024-10-15 13:07:27.504797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.527 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:07.527 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.527 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:07.527 [2024-10-15 13:07:27.514439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.527 [2024-10-15 13:07:27.514549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.527 [2024-10-15 13:07:27.514598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.527 [2024-10-15 13:07:27.514634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.527 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.527 [2024-10-15 13:07:27.514661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.527 [2024-10-15 13:07:27.514705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 13:07:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1372759 00:27:07.527 [2024-10-15 13:07:27.524439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.527 [2024-10-15 13:07:27.524522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.527 [2024-10-15 13:07:27.524549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.527 [2024-10-15 13:07:27.524563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.527 [2024-10-15 13:07:27.524576] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.527 [2024-10-15 13:07:27.524612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 [2024-10-15 13:07:27.534444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.527 [2024-10-15 13:07:27.534512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.527 [2024-10-15 13:07:27.534532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.527 [2024-10-15 13:07:27.534541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.527 [2024-10-15 13:07:27.534550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.527 [2024-10-15 13:07:27.534571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 [2024-10-15 13:07:27.544427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.527 [2024-10-15 13:07:27.544483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.527 [2024-10-15 13:07:27.544497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.527 [2024-10-15 13:07:27.544503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.527 [2024-10-15 13:07:27.544510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.527 [2024-10-15 13:07:27.544524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 [2024-10-15 13:07:27.554439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.527 [2024-10-15 13:07:27.554492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.527 [2024-10-15 13:07:27.554506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.527 [2024-10-15 13:07:27.554519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.527 [2024-10-15 13:07:27.554525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.527 [2024-10-15 13:07:27.554540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 [2024-10-15 13:07:27.564463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.527 [2024-10-15 13:07:27.564518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.527 [2024-10-15 13:07:27.564532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.527 [2024-10-15 13:07:27.564539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.527 [2024-10-15 13:07:27.564545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.527 [2024-10-15 13:07:27.564559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 [2024-10-15 13:07:27.574512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.527 [2024-10-15 13:07:27.574572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.527 [2024-10-15 13:07:27.574586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.527 [2024-10-15 13:07:27.574592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.527 [2024-10-15 13:07:27.574599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.527 [2024-10-15 13:07:27.574617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 [2024-10-15 13:07:27.584620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.527 [2024-10-15 13:07:27.584727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.527 [2024-10-15 13:07:27.584741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.527 [2024-10-15 13:07:27.584747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.584753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.528 [2024-10-15 13:07:27.584768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.528 qpair failed and we were unable to recover it. 
00:27:07.528 [2024-10-15 13:07:27.594575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.528 [2024-10-15 13:07:27.594631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.528 [2024-10-15 13:07:27.594645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.528 [2024-10-15 13:07:27.594652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.594657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.528 [2024-10-15 13:07:27.594673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.528 qpair failed and we were unable to recover it. 
00:27:07.528 [2024-10-15 13:07:27.604607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.528 [2024-10-15 13:07:27.604661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.528 [2024-10-15 13:07:27.604675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.528 [2024-10-15 13:07:27.604682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.604687] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.528 [2024-10-15 13:07:27.604702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.528 qpair failed and we were unable to recover it. 
00:27:07.528 [2024-10-15 13:07:27.614626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.528 [2024-10-15 13:07:27.614681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.528 [2024-10-15 13:07:27.614694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.528 [2024-10-15 13:07:27.614701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.614707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.528 [2024-10-15 13:07:27.614721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.528 qpair failed and we were unable to recover it. 
00:27:07.528 [2024-10-15 13:07:27.624712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.528 [2024-10-15 13:07:27.624769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.528 [2024-10-15 13:07:27.624782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.528 [2024-10-15 13:07:27.624788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.624794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.528 [2024-10-15 13:07:27.624809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.528 qpair failed and we were unable to recover it. 
00:27:07.528 [2024-10-15 13:07:27.634709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.528 [2024-10-15 13:07:27.634760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.528 [2024-10-15 13:07:27.634773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.528 [2024-10-15 13:07:27.634779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.634785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.528 [2024-10-15 13:07:27.634800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.528 qpair failed and we were unable to recover it. 
00:27:07.528 [2024-10-15 13:07:27.644688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.528 [2024-10-15 13:07:27.644760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.528 [2024-10-15 13:07:27.644777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.528 [2024-10-15 13:07:27.644784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.644791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.528 [2024-10-15 13:07:27.644806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.528 qpair failed and we were unable to recover it. 
00:27:07.528 [2024-10-15 13:07:27.654772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.528 [2024-10-15 13:07:27.654868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.528 [2024-10-15 13:07:27.654886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.528 [2024-10-15 13:07:27.654894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.654900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.528 [2024-10-15 13:07:27.654916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.528 qpair failed and we were unable to recover it. 
00:27:07.528 [2024-10-15 13:07:27.664765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.528 [2024-10-15 13:07:27.664819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.528 [2024-10-15 13:07:27.664834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.528 [2024-10-15 13:07:27.664841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.664847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.528 [2024-10-15 13:07:27.664862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.528 qpair failed and we were unable to recover it. 
00:27:07.528 [2024-10-15 13:07:27.674825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.528 [2024-10-15 13:07:27.674886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.528 [2024-10-15 13:07:27.674900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.528 [2024-10-15 13:07:27.674907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.674912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.528 [2024-10-15 13:07:27.674927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.528 qpair failed and we were unable to recover it. 
00:27:07.528 [2024-10-15 13:07:27.684823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.528 [2024-10-15 13:07:27.684879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.528 [2024-10-15 13:07:27.684893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.528 [2024-10-15 13:07:27.684900] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.684906] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.528 [2024-10-15 13:07:27.684925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.528 qpair failed and we were unable to recover it. 
00:27:07.528 [2024-10-15 13:07:27.694857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.528 [2024-10-15 13:07:27.694912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.528 [2024-10-15 13:07:27.694926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.528 [2024-10-15 13:07:27.694933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.694939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.528 [2024-10-15 13:07:27.694954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.528 qpair failed and we were unable to recover it. 
00:27:07.528 [2024-10-15 13:07:27.704851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.528 [2024-10-15 13:07:27.704903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.528 [2024-10-15 13:07:27.704917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.528 [2024-10-15 13:07:27.704923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.528 [2024-10-15 13:07:27.704929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.704945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.714822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.714875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.714888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.529 [2024-10-15 13:07:27.714895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.529 [2024-10-15 13:07:27.714901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.714915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.724929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.724983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.724996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.529 [2024-10-15 13:07:27.725002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.529 [2024-10-15 13:07:27.725008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.725023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.734929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.734986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.735003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.529 [2024-10-15 13:07:27.735010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.529 [2024-10-15 13:07:27.735016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.735029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.744981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.745032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.745045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.529 [2024-10-15 13:07:27.745051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.529 [2024-10-15 13:07:27.745057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.745072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.755003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.755066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.755079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.529 [2024-10-15 13:07:27.755086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.529 [2024-10-15 13:07:27.755092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.755106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.765019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.765072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.765085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.529 [2024-10-15 13:07:27.765092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.529 [2024-10-15 13:07:27.765098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.765112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.775072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.775156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.775170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.529 [2024-10-15 13:07:27.775176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.529 [2024-10-15 13:07:27.775182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.775199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.785092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.785145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.785158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.529 [2024-10-15 13:07:27.785164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.529 [2024-10-15 13:07:27.785170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.785184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.795113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.795170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.795184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.529 [2024-10-15 13:07:27.795190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.529 [2024-10-15 13:07:27.795196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.795211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.805142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.805197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.805210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.529 [2024-10-15 13:07:27.805216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.529 [2024-10-15 13:07:27.805222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.805237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.815166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.815220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.815233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.529 [2024-10-15 13:07:27.815240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.529 [2024-10-15 13:07:27.815245] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.815260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.825194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.825248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.825265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.529 [2024-10-15 13:07:27.825271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.529 [2024-10-15 13:07:27.825277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.529 [2024-10-15 13:07:27.825292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.529 qpair failed and we were unable to recover it. 
00:27:07.529 [2024-10-15 13:07:27.835235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.529 [2024-10-15 13:07:27.835289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.529 [2024-10-15 13:07:27.835302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.530 [2024-10-15 13:07:27.835309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.530 [2024-10-15 13:07:27.835315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.530 [2024-10-15 13:07:27.835329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.530 qpair failed and we were unable to recover it. 
00:27:07.530 [2024-10-15 13:07:27.845251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.530 [2024-10-15 13:07:27.845307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.530 [2024-10-15 13:07:27.845319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.530 [2024-10-15 13:07:27.845325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.530 [2024-10-15 13:07:27.845331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.530 [2024-10-15 13:07:27.845346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.530 qpair failed and we were unable to recover it. 
00:27:07.790 [2024-10-15 13:07:27.855297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.790 [2024-10-15 13:07:27.855353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.790 [2024-10-15 13:07:27.855366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.791 [2024-10-15 13:07:27.855373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-10-15 13:07:27.855379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.791 [2024-10-15 13:07:27.855393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:07.791 [2024-10-15 13:07:27.865269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.791 [2024-10-15 13:07:27.865322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.791 [2024-10-15 13:07:27.865335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.791 [2024-10-15 13:07:27.865342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-10-15 13:07:27.865351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.791 [2024-10-15 13:07:27.865366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:07.791 [2024-10-15 13:07:27.875353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.791 [2024-10-15 13:07:27.875417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.791 [2024-10-15 13:07:27.875430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.791 [2024-10-15 13:07:27.875437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-10-15 13:07:27.875443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.791 [2024-10-15 13:07:27.875457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:07.791 [2024-10-15 13:07:27.885360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.791 [2024-10-15 13:07:27.885415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.791 [2024-10-15 13:07:27.885429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.791 [2024-10-15 13:07:27.885436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-10-15 13:07:27.885442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.791 [2024-10-15 13:07:27.885456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:07.791 [2024-10-15 13:07:27.895458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.791 [2024-10-15 13:07:27.895559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.791 [2024-10-15 13:07:27.895574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.791 [2024-10-15 13:07:27.895581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-10-15 13:07:27.895587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.791 [2024-10-15 13:07:27.895606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:07.791 [2024-10-15 13:07:27.905458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.791 [2024-10-15 13:07:27.905563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.791 [2024-10-15 13:07:27.905577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.791 [2024-10-15 13:07:27.905583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-10-15 13:07:27.905589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.791 [2024-10-15 13:07:27.905607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:07.791 [2024-10-15 13:07:27.915454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.791 [2024-10-15 13:07:27.915511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.791 [2024-10-15 13:07:27.915524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.791 [2024-10-15 13:07:27.915530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-10-15 13:07:27.915536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.791 [2024-10-15 13:07:27.915550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:07.791 [2024-10-15 13:07:27.925482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.791 [2024-10-15 13:07:27.925535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.791 [2024-10-15 13:07:27.925548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.791 [2024-10-15 13:07:27.925554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-10-15 13:07:27.925560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.791 [2024-10-15 13:07:27.925575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:07.791 [2024-10-15 13:07:27.935579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.791 [2024-10-15 13:07:27.935644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.791 [2024-10-15 13:07:27.935659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.791 [2024-10-15 13:07:27.935665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-10-15 13:07:27.935671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.791 [2024-10-15 13:07:27.935685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:07.791 [2024-10-15 13:07:27.945559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.791 [2024-10-15 13:07:27.945636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.791 [2024-10-15 13:07:27.945649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.791 [2024-10-15 13:07:27.945656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-10-15 13:07:27.945661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.791 [2024-10-15 13:07:27.945676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:07.791 [2024-10-15 13:07:27.955562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.792 [2024-10-15 13:07:27.955614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.792 [2024-10-15 13:07:27.955628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.792 [2024-10-15 13:07:27.955638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.792 [2024-10-15 13:07:27.955644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.792 [2024-10-15 13:07:27.955659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.792 qpair failed and we were unable to recover it. 
00:27:07.792 [2024-10-15 13:07:27.965591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.792 [2024-10-15 13:07:27.965648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.792 [2024-10-15 13:07:27.965661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.792 [2024-10-15 13:07:27.965668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.792 [2024-10-15 13:07:27.965674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.792 [2024-10-15 13:07:27.965688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.792 qpair failed and we were unable to recover it. 
00:27:07.792 [2024-10-15 13:07:27.975562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.792 [2024-10-15 13:07:27.975652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.792 [2024-10-15 13:07:27.975665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.792 [2024-10-15 13:07:27.975671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.792 [2024-10-15 13:07:27.975677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.792 [2024-10-15 13:07:27.975691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.792 qpair failed and we were unable to recover it. 
00:27:07.792 [2024-10-15 13:07:27.985649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.792 [2024-10-15 13:07:27.985701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.792 [2024-10-15 13:07:27.985715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.792 [2024-10-15 13:07:27.985721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.792 [2024-10-15 13:07:27.985727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.792 [2024-10-15 13:07:27.985742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.792 qpair failed and we were unable to recover it. 
00:27:07.792 [2024-10-15 13:07:27.995684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.792 [2024-10-15 13:07:27.995791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.792 [2024-10-15 13:07:27.995805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.792 [2024-10-15 13:07:27.995812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.792 [2024-10-15 13:07:27.995817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.792 [2024-10-15 13:07:27.995832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.792 qpair failed and we were unable to recover it. 
00:27:07.792 [2024-10-15 13:07:28.005707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.792 [2024-10-15 13:07:28.005807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.792 [2024-10-15 13:07:28.005820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.792 [2024-10-15 13:07:28.005827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.792 [2024-10-15 13:07:28.005833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.792 [2024-10-15 13:07:28.005848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.792 qpair failed and we were unable to recover it. 
00:27:07.792 [2024-10-15 13:07:28.015765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.792 [2024-10-15 13:07:28.015820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.792 [2024-10-15 13:07:28.015834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.792 [2024-10-15 13:07:28.015840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.792 [2024-10-15 13:07:28.015846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.792 [2024-10-15 13:07:28.015860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.792 qpair failed and we were unable to recover it. 
00:27:07.792 [2024-10-15 13:07:28.025786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.792 [2024-10-15 13:07:28.025844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.792 [2024-10-15 13:07:28.025857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.792 [2024-10-15 13:07:28.025864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.792 [2024-10-15 13:07:28.025870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.792 [2024-10-15 13:07:28.025884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.792 qpair failed and we were unable to recover it. 
00:27:07.792 [2024-10-15 13:07:28.035817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.792 [2024-10-15 13:07:28.035906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.792 [2024-10-15 13:07:28.035921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.792 [2024-10-15 13:07:28.035929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.792 [2024-10-15 13:07:28.035936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.792 [2024-10-15 13:07:28.035952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.792 qpair failed and we were unable to recover it. 
00:27:07.792 [2024-10-15 13:07:28.045829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.792 [2024-10-15 13:07:28.045884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.792 [2024-10-15 13:07:28.045898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.792 [2024-10-15 13:07:28.045907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.792 [2024-10-15 13:07:28.045913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.792 [2024-10-15 13:07:28.045928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.792 qpair failed and we were unable to recover it. 
00:27:07.792 [2024-10-15 13:07:28.055902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.793 [2024-10-15 13:07:28.056008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.793 [2024-10-15 13:07:28.056021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.793 [2024-10-15 13:07:28.056028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.793 [2024-10-15 13:07:28.056033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.793 [2024-10-15 13:07:28.056048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.793 qpair failed and we were unable to recover it. 
00:27:07.793 [2024-10-15 13:07:28.065891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.793 [2024-10-15 13:07:28.065947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.793 [2024-10-15 13:07:28.065961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.793 [2024-10-15 13:07:28.065967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.793 [2024-10-15 13:07:28.065973] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.793 [2024-10-15 13:07:28.065987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.793 qpair failed and we were unable to recover it. 
00:27:07.793 [2024-10-15 13:07:28.075926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.793 [2024-10-15 13:07:28.075996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.793 [2024-10-15 13:07:28.076009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.793 [2024-10-15 13:07:28.076016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.793 [2024-10-15 13:07:28.076022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.793 [2024-10-15 13:07:28.076036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.793 qpair failed and we were unable to recover it. 
00:27:07.793 [2024-10-15 13:07:28.085982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.793 [2024-10-15 13:07:28.086033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.793 [2024-10-15 13:07:28.086048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.793 [2024-10-15 13:07:28.086054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.793 [2024-10-15 13:07:28.086060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.793 [2024-10-15 13:07:28.086076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.793 qpair failed and we were unable to recover it. 
00:27:07.793 [2024-10-15 13:07:28.095983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.793 [2024-10-15 13:07:28.096058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.793 [2024-10-15 13:07:28.096073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.793 [2024-10-15 13:07:28.096079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.793 [2024-10-15 13:07:28.096085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.793 [2024-10-15 13:07:28.096100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.793 qpair failed and we were unable to recover it. 
00:27:07.793 [2024-10-15 13:07:28.105964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.793 [2024-10-15 13:07:28.106021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.793 [2024-10-15 13:07:28.106034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.793 [2024-10-15 13:07:28.106041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.793 [2024-10-15 13:07:28.106047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:07.793 [2024-10-15 13:07:28.106061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.793 qpair failed and we were unable to recover it. 
00:27:08.053 [2024-10-15 13:07:28.115974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.053 [2024-10-15 13:07:28.116061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.053 [2024-10-15 13:07:28.116075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.053 [2024-10-15 13:07:28.116081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.053 [2024-10-15 13:07:28.116087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.053 [2024-10-15 13:07:28.116102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.053 qpair failed and we were unable to recover it. 
00:27:08.053 [2024-10-15 13:07:28.126006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.053 [2024-10-15 13:07:28.126075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.053 [2024-10-15 13:07:28.126088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.053 [2024-10-15 13:07:28.126095] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.053 [2024-10-15 13:07:28.126100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.053 [2024-10-15 13:07:28.126115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.053 qpair failed and we were unable to recover it. 
00:27:08.053 [2024-10-15 13:07:28.136057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.053 [2024-10-15 13:07:28.136112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.053 [2024-10-15 13:07:28.136128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.053 [2024-10-15 13:07:28.136135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.053 [2024-10-15 13:07:28.136141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.053 [2024-10-15 13:07:28.136155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.053 qpair failed and we were unable to recover it. 
00:27:08.053 [2024-10-15 13:07:28.146050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.053 [2024-10-15 13:07:28.146108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.053 [2024-10-15 13:07:28.146122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.053 [2024-10-15 13:07:28.146129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.053 [2024-10-15 13:07:28.146135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.053 [2024-10-15 13:07:28.146151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.053 qpair failed and we were unable to recover it. 
00:27:08.053 [2024-10-15 13:07:28.156069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.053 [2024-10-15 13:07:28.156121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.053 [2024-10-15 13:07:28.156136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.053 [2024-10-15 13:07:28.156142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.053 [2024-10-15 13:07:28.156148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.053 [2024-10-15 13:07:28.156162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.053 qpair failed and we were unable to recover it. 
00:27:08.053 [2024-10-15 13:07:28.166184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.054 [2024-10-15 13:07:28.166248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.054 [2024-10-15 13:07:28.166261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.054 [2024-10-15 13:07:28.166267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.054 [2024-10-15 13:07:28.166273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.054 [2024-10-15 13:07:28.166288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.054 qpair failed and we were unable to recover it. 
00:27:08.054 [2024-10-15 13:07:28.176212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.054 [2024-10-15 13:07:28.176267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.054 [2024-10-15 13:07:28.176281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.054 [2024-10-15 13:07:28.176287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.054 [2024-10-15 13:07:28.176294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.054 [2024-10-15 13:07:28.176312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.054 qpair failed and we were unable to recover it. 
00:27:08.054 [2024-10-15 13:07:28.186225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.054 [2024-10-15 13:07:28.186282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.054 [2024-10-15 13:07:28.186295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.054 [2024-10-15 13:07:28.186301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.054 [2024-10-15 13:07:28.186307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.054 [2024-10-15 13:07:28.186321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.054 qpair failed and we were unable to recover it. 
00:27:08.054 [2024-10-15 13:07:28.196187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.054 [2024-10-15 13:07:28.196241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.054 [2024-10-15 13:07:28.196255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.054 [2024-10-15 13:07:28.196263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.054 [2024-10-15 13:07:28.196270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.054 [2024-10-15 13:07:28.196285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.054 qpair failed and we were unable to recover it. 
00:27:08.054 [2024-10-15 13:07:28.206257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.054 [2024-10-15 13:07:28.206308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.054 [2024-10-15 13:07:28.206321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.054 [2024-10-15 13:07:28.206327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.054 [2024-10-15 13:07:28.206334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.054 [2024-10-15 13:07:28.206348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.054 qpair failed and we were unable to recover it. 
00:27:08.054 [2024-10-15 13:07:28.216244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.054 [2024-10-15 13:07:28.216299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.054 [2024-10-15 13:07:28.216312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.054 [2024-10-15 13:07:28.216319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.054 [2024-10-15 13:07:28.216325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.054 [2024-10-15 13:07:28.216340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.054 qpair failed and we were unable to recover it. 
00:27:08.054 [2024-10-15 13:07:28.226342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.054 [2024-10-15 13:07:28.226396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.054 [2024-10-15 13:07:28.226413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.054 [2024-10-15 13:07:28.226420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.054 [2024-10-15 13:07:28.226426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.054 [2024-10-15 13:07:28.226441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.054 qpair failed and we were unable to recover it. 
00:27:08.054 [2024-10-15 13:07:28.236361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.054 [2024-10-15 13:07:28.236419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.054 [2024-10-15 13:07:28.236432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.054 [2024-10-15 13:07:28.236439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.054 [2024-10-15 13:07:28.236446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.054 [2024-10-15 13:07:28.236460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.054 qpair failed and we were unable to recover it. 
00:27:08.054 [2024-10-15 13:07:28.246398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.054 [2024-10-15 13:07:28.246492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.054 [2024-10-15 13:07:28.246506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.054 [2024-10-15 13:07:28.246512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.054 [2024-10-15 13:07:28.246518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.054 [2024-10-15 13:07:28.246532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.054 qpair failed and we were unable to recover it. 
00:27:08.054 [2024-10-15 13:07:28.256362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.054 [2024-10-15 13:07:28.256419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.054 [2024-10-15 13:07:28.256433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.054 [2024-10-15 13:07:28.256439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.054 [2024-10-15 13:07:28.256445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.054 [2024-10-15 13:07:28.256460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.054 qpair failed and we were unable to recover it. 
00:27:08.054 [2024-10-15 13:07:28.266483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.055 [2024-10-15 13:07:28.266542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.055 [2024-10-15 13:07:28.266555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.055 [2024-10-15 13:07:28.266562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.055 [2024-10-15 13:07:28.266568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.055 [2024-10-15 13:07:28.266588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.055 qpair failed and we were unable to recover it. 
[... the identical connect-failure sequence repeats 35 more times, roughly every 10 ms, from [2024-10-15 13:07:28.276481] through [2024-10-15 13:07:28.617619]: "Unknown controller ID 0x1", "Connect command failed, rc -5" against traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1, "sct 1, sc 130", "Failed to connect tqpair=0x7ff120000b90", and "CQ transport error -6 (No such device or address) on qpair id 1", each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:08.317 [2024-10-15 13:07:28.627452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.317 [2024-10-15 13:07:28.627505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.317 [2024-10-15 13:07:28.627518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.317 [2024-10-15 13:07:28.627528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.317 [2024-10-15 13:07:28.627534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.317 [2024-10-15 13:07:28.627548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.317 qpair failed and we were unable to recover it. 
00:27:08.577 [2024-10-15 13:07:28.637410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.577 [2024-10-15 13:07:28.637503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.577 [2024-10-15 13:07:28.637517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.577 [2024-10-15 13:07:28.637524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.577 [2024-10-15 13:07:28.637529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.577 [2024-10-15 13:07:28.637544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.577 qpair failed and we were unable to recover it. 
00:27:08.577 [2024-10-15 13:07:28.647537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.577 [2024-10-15 13:07:28.647593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.577 [2024-10-15 13:07:28.647613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.577 [2024-10-15 13:07:28.647623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.577 [2024-10-15 13:07:28.647629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.577 [2024-10-15 13:07:28.647645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.577 qpair failed and we were unable to recover it. 
00:27:08.577 [2024-10-15 13:07:28.657535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.577 [2024-10-15 13:07:28.657589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.577 [2024-10-15 13:07:28.657607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.577 [2024-10-15 13:07:28.657614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.577 [2024-10-15 13:07:28.657620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.577 [2024-10-15 13:07:28.657635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.577 qpair failed and we were unable to recover it. 
00:27:08.577 [2024-10-15 13:07:28.667570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.577 [2024-10-15 13:07:28.667622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.577 [2024-10-15 13:07:28.667636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.577 [2024-10-15 13:07:28.667642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.577 [2024-10-15 13:07:28.667647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.577 [2024-10-15 13:07:28.667662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.577 qpair failed and we were unable to recover it. 
00:27:08.577 [2024-10-15 13:07:28.677639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.577 [2024-10-15 13:07:28.677693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.577 [2024-10-15 13:07:28.677706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.577 [2024-10-15 13:07:28.677713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.577 [2024-10-15 13:07:28.677719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.577 [2024-10-15 13:07:28.677734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.577 qpair failed and we were unable to recover it. 
00:27:08.577 [2024-10-15 13:07:28.687625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.577 [2024-10-15 13:07:28.687683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.577 [2024-10-15 13:07:28.687696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.577 [2024-10-15 13:07:28.687703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.577 [2024-10-15 13:07:28.687709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.577 [2024-10-15 13:07:28.687723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.577 qpair failed and we were unable to recover it. 
00:27:08.577 [2024-10-15 13:07:28.697653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.577 [2024-10-15 13:07:28.697706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.577 [2024-10-15 13:07:28.697719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.577 [2024-10-15 13:07:28.697725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.577 [2024-10-15 13:07:28.697731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.697746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.707717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.707771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.707784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.707790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.707796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.707811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.717699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.717753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.717769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.717776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.717782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.717796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.727661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.727711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.727724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.727730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.727736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.727751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.737767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.737822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.737835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.737841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.737847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.737861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.747779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.747835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.747847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.747854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.747859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.747873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.757804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.757854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.757867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.757873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.757880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.757897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.767846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.767901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.767914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.767920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.767926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.767940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.777922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.778021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.778034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.778041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.778047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.778061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.787908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.787959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.787973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.787979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.787986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.788000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.797933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.797998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.798012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.798018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.798024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.798039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.807950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.808002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.808019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.808025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.808031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.808046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.818005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.818063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.818076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.818081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.818087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.818101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.828014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.828066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.828079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.828085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.828091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.828105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.578 qpair failed and we were unable to recover it. 
00:27:08.578 [2024-10-15 13:07:28.838044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.578 [2024-10-15 13:07:28.838090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.578 [2024-10-15 13:07:28.838104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.578 [2024-10-15 13:07:28.838110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.578 [2024-10-15 13:07:28.838116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.578 [2024-10-15 13:07:28.838130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.579 qpair failed and we were unable to recover it. 
00:27:08.579 [2024-10-15 13:07:28.848116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.579 [2024-10-15 13:07:28.848171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.579 [2024-10-15 13:07:28.848184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.579 [2024-10-15 13:07:28.848191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.579 [2024-10-15 13:07:28.848200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.579 [2024-10-15 13:07:28.848214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.579 qpair failed and we were unable to recover it. 
00:27:08.579 [2024-10-15 13:07:28.858147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.579 [2024-10-15 13:07:28.858202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.579 [2024-10-15 13:07:28.858215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.579 [2024-10-15 13:07:28.858222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.579 [2024-10-15 13:07:28.858228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.579 [2024-10-15 13:07:28.858242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.579 qpair failed and we were unable to recover it. 
00:27:08.579 [2024-10-15 13:07:28.868152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.579 [2024-10-15 13:07:28.868205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.579 [2024-10-15 13:07:28.868219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.579 [2024-10-15 13:07:28.868225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.579 [2024-10-15 13:07:28.868230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.579 [2024-10-15 13:07:28.868245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.579 qpair failed and we were unable to recover it. 
00:27:08.579 [2024-10-15 13:07:28.878083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.579 [2024-10-15 13:07:28.878136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.579 [2024-10-15 13:07:28.878149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.579 [2024-10-15 13:07:28.878155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.579 [2024-10-15 13:07:28.878161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.579 [2024-10-15 13:07:28.878176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.579 qpair failed and we were unable to recover it. 
00:27:08.579 [2024-10-15 13:07:28.888188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.579 [2024-10-15 13:07:28.888255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.579 [2024-10-15 13:07:28.888268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.579 [2024-10-15 13:07:28.888275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.579 [2024-10-15 13:07:28.888281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.579 [2024-10-15 13:07:28.888296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.579 qpair failed and we were unable to recover it. 
00:27:08.579 [2024-10-15 13:07:28.898214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.839 [2024-10-15 13:07:28.898289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.839 [2024-10-15 13:07:28.898310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.839 [2024-10-15 13:07:28.898320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.839 [2024-10-15 13:07:28.898327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.839 [2024-10-15 13:07:28.898345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.839 qpair failed and we were unable to recover it. 
00:27:08.839 [2024-10-15 13:07:28.908217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.839 [2024-10-15 13:07:28.908278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.839 [2024-10-15 13:07:28.908293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.839 [2024-10-15 13:07:28.908299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.839 [2024-10-15 13:07:28.908305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.839 [2024-10-15 13:07:28.908320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.839 qpair failed and we were unable to recover it. 
00:27:08.839 [2024-10-15 13:07:28.918265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.839 [2024-10-15 13:07:28.918325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.839 [2024-10-15 13:07:28.918339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.839 [2024-10-15 13:07:28.918346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:28.918351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:28.918365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:28.928293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:28.928348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:28.928361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:28.928367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:28.928373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:28.928388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:28.938330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:28.938394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:28.938408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:28.938415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:28.938424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:28.938439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:28.948386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:28.948491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:28.948505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:28.948511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:28.948517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:28.948531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:28.958372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:28.958423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:28.958437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:28.958443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:28.958449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:28.958463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:28.968394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:28.968445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:28.968458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:28.968464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:28.968469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:28.968484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:28.978451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:28.978503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:28.978516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:28.978522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:28.978528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:28.978542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:28.988472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:28.988525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:28.988539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:28.988545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:28.988551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:28.988566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:28.998475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:28.998529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:28.998543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:28.998549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:28.998555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:28.998570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:29.008514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:29.008564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:29.008577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:29.008583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:29.008589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:29.008607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:29.018554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:29.018612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:29.018625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:29.018631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:29.018637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:29.018651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:29.028517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:29.028612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:29.028626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:29.028635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:29.028641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:29.028655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:29.038611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:29.038667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:29.038681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:29.038687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:29.038693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:29.038707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:29.048653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:29.048738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:29.048752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:29.048758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:29.048764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.840 [2024-10-15 13:07:29.048778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-10-15 13:07:29.058664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.840 [2024-10-15 13:07:29.058724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.840 [2024-10-15 13:07:29.058738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.840 [2024-10-15 13:07:29.058745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.840 [2024-10-15 13:07:29.058750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.841 [2024-10-15 13:07:29.058765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-10-15 13:07:29.068692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.841 [2024-10-15 13:07:29.068745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.841 [2024-10-15 13:07:29.068759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.841 [2024-10-15 13:07:29.068764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.841 [2024-10-15 13:07:29.068771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.841 [2024-10-15 13:07:29.068785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-10-15 13:07:29.078760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.841 [2024-10-15 13:07:29.078818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.841 [2024-10-15 13:07:29.078833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.841 [2024-10-15 13:07:29.078840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.841 [2024-10-15 13:07:29.078845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.841 [2024-10-15 13:07:29.078860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-10-15 13:07:29.088742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.841 [2024-10-15 13:07:29.088794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.841 [2024-10-15 13:07:29.088807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.841 [2024-10-15 13:07:29.088813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.841 [2024-10-15 13:07:29.088819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.841 [2024-10-15 13:07:29.088833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-10-15 13:07:29.098790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.841 [2024-10-15 13:07:29.098843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.841 [2024-10-15 13:07:29.098857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.841 [2024-10-15 13:07:29.098863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.841 [2024-10-15 13:07:29.098869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.841 [2024-10-15 13:07:29.098883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-10-15 13:07:29.108812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.841 [2024-10-15 13:07:29.108867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.841 [2024-10-15 13:07:29.108881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.841 [2024-10-15 13:07:29.108887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.841 [2024-10-15 13:07:29.108893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.841 [2024-10-15 13:07:29.108907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-10-15 13:07:29.118844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.841 [2024-10-15 13:07:29.118895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.841 [2024-10-15 13:07:29.118909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.841 [2024-10-15 13:07:29.118919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.841 [2024-10-15 13:07:29.118925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.841 [2024-10-15 13:07:29.118939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-10-15 13:07:29.128870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.841 [2024-10-15 13:07:29.128926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.841 [2024-10-15 13:07:29.128939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.841 [2024-10-15 13:07:29.128946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.841 [2024-10-15 13:07:29.128951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.841 [2024-10-15 13:07:29.128966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-10-15 13:07:29.138905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.841 [2024-10-15 13:07:29.138975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.841 [2024-10-15 13:07:29.138988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.841 [2024-10-15 13:07:29.138994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.841 [2024-10-15 13:07:29.139000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.841 [2024-10-15 13:07:29.139014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-10-15 13:07:29.148934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.841 [2024-10-15 13:07:29.148992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.841 [2024-10-15 13:07:29.149006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.841 [2024-10-15 13:07:29.149012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.841 [2024-10-15 13:07:29.149018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.841 [2024-10-15 13:07:29.149033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-10-15 13:07:29.158945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.841 [2024-10-15 13:07:29.159010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.841 [2024-10-15 13:07:29.159024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.841 [2024-10-15 13:07:29.159031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.841 [2024-10-15 13:07:29.159037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:08.841 [2024-10-15 13:07:29.159052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:09.101 [2024-10-15 13:07:29.168993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.101 [2024-10-15 13:07:29.169045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.101 [2024-10-15 13:07:29.169058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.101 [2024-10-15 13:07:29.169065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.101 [2024-10-15 13:07:29.169071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.101 [2024-10-15 13:07:29.169085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.101 qpair failed and we were unable to recover it. 
00:27:09.101 [2024-10-15 13:07:29.179019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.101 [2024-10-15 13:07:29.179073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.101 [2024-10-15 13:07:29.179087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.101 [2024-10-15 13:07:29.179094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.101 [2024-10-15 13:07:29.179099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.101 [2024-10-15 13:07:29.179114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.101 qpair failed and we were unable to recover it. 
00:27:09.101 [2024-10-15 13:07:29.189103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.102 [2024-10-15 13:07:29.189159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.102 [2024-10-15 13:07:29.189172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.102 [2024-10-15 13:07:29.189179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.102 [2024-10-15 13:07:29.189185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.102 [2024-10-15 13:07:29.189199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.102 qpair failed and we were unable to recover it. 
00:27:09.102 [2024-10-15 13:07:29.199065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.102 [2024-10-15 13:07:29.199119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.102 [2024-10-15 13:07:29.199133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.102 [2024-10-15 13:07:29.199139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.102 [2024-10-15 13:07:29.199145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.102 [2024-10-15 13:07:29.199159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.102 qpair failed and we were unable to recover it. 
00:27:09.102 [2024-10-15 13:07:29.209065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.102 [2024-10-15 13:07:29.209120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.102 [2024-10-15 13:07:29.209137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.102 [2024-10-15 13:07:29.209143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.102 [2024-10-15 13:07:29.209149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.102 [2024-10-15 13:07:29.209164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.102 qpair failed and we were unable to recover it. 
00:27:09.102 [2024-10-15 13:07:29.219134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.102 [2024-10-15 13:07:29.219204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.102 [2024-10-15 13:07:29.219217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.102 [2024-10-15 13:07:29.219224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.102 [2024-10-15 13:07:29.219230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.102 [2024-10-15 13:07:29.219244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.102 qpair failed and we were unable to recover it. 
00:27:09.102 [2024-10-15 13:07:29.229212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.102 [2024-10-15 13:07:29.229315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.102 [2024-10-15 13:07:29.229329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.102 [2024-10-15 13:07:29.229335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.102 [2024-10-15 13:07:29.229341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.102 [2024-10-15 13:07:29.229355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.102 qpair failed and we were unable to recover it.
00:27:09.102 [2024-10-15 13:07:29.239178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.102 [2024-10-15 13:07:29.239251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.102 [2024-10-15 13:07:29.239264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.102 [2024-10-15 13:07:29.239270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.102 [2024-10-15 13:07:29.239276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.102 [2024-10-15 13:07:29.239290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.102 qpair failed and we were unable to recover it.
00:27:09.102 [2024-10-15 13:07:29.249207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.102 [2024-10-15 13:07:29.249258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.102 [2024-10-15 13:07:29.249271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.102 [2024-10-15 13:07:29.249277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.102 [2024-10-15 13:07:29.249283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.102 [2024-10-15 13:07:29.249300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.102 qpair failed and we were unable to recover it.
00:27:09.102 [2024-10-15 13:07:29.259257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.102 [2024-10-15 13:07:29.259362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.102 [2024-10-15 13:07:29.259375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.102 [2024-10-15 13:07:29.259382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.102 [2024-10-15 13:07:29.259387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.102 [2024-10-15 13:07:29.259401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.102 qpair failed and we were unable to recover it.
00:27:09.102 [2024-10-15 13:07:29.269194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.102 [2024-10-15 13:07:29.269247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.102 [2024-10-15 13:07:29.269260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.102 [2024-10-15 13:07:29.269266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.102 [2024-10-15 13:07:29.269272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.102 [2024-10-15 13:07:29.269286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.102 qpair failed and we were unable to recover it.
00:27:09.102 [2024-10-15 13:07:29.279221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.102 [2024-10-15 13:07:29.279274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.102 [2024-10-15 13:07:29.279286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.102 [2024-10-15 13:07:29.279293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.102 [2024-10-15 13:07:29.279299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.102 [2024-10-15 13:07:29.279312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.102 qpair failed and we were unable to recover it.
00:27:09.102 [2024-10-15 13:07:29.289312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.102 [2024-10-15 13:07:29.289366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.102 [2024-10-15 13:07:29.289378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.102 [2024-10-15 13:07:29.289385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.102 [2024-10-15 13:07:29.289391] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.102 [2024-10-15 13:07:29.289405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.102 qpair failed and we were unable to recover it.
00:27:09.102 [2024-10-15 13:07:29.299381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.102 [2024-10-15 13:07:29.299485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.102 [2024-10-15 13:07:29.299504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.102 [2024-10-15 13:07:29.299511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.102 [2024-10-15 13:07:29.299516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.102 [2024-10-15 13:07:29.299532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.102 qpair failed and we were unable to recover it.
00:27:09.102 [2024-10-15 13:07:29.309383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.102 [2024-10-15 13:07:29.309434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.102 [2024-10-15 13:07:29.309448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.102 [2024-10-15 13:07:29.309454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.102 [2024-10-15 13:07:29.309461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.102 [2024-10-15 13:07:29.309475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.102 qpair failed and we were unable to recover it.
00:27:09.102 [2024-10-15 13:07:29.319416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.102 [2024-10-15 13:07:29.319487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.102 [2024-10-15 13:07:29.319501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.102 [2024-10-15 13:07:29.319508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.102 [2024-10-15 13:07:29.319514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.102 [2024-10-15 13:07:29.319530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.102 qpair failed and we were unable to recover it.
00:27:09.103 [2024-10-15 13:07:29.329424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-10-15 13:07:29.329471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-10-15 13:07:29.329485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-10-15 13:07:29.329492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-10-15 13:07:29.329499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.103 [2024-10-15 13:07:29.329514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-10-15 13:07:29.339399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-10-15 13:07:29.339480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-10-15 13:07:29.339493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-10-15 13:07:29.339500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-10-15 13:07:29.339506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.103 [2024-10-15 13:07:29.339524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-10-15 13:07:29.349509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-10-15 13:07:29.349564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-10-15 13:07:29.349578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-10-15 13:07:29.349584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-10-15 13:07:29.349590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.103 [2024-10-15 13:07:29.349609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-10-15 13:07:29.359461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-10-15 13:07:29.359514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-10-15 13:07:29.359527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-10-15 13:07:29.359533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-10-15 13:07:29.359539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.103 [2024-10-15 13:07:29.359554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-10-15 13:07:29.369540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-10-15 13:07:29.369591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-10-15 13:07:29.369608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-10-15 13:07:29.369615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-10-15 13:07:29.369620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.103 [2024-10-15 13:07:29.369635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-10-15 13:07:29.379647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-10-15 13:07:29.379703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-10-15 13:07:29.379718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-10-15 13:07:29.379724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-10-15 13:07:29.379729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.103 [2024-10-15 13:07:29.379745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-10-15 13:07:29.389643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-10-15 13:07:29.389707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-10-15 13:07:29.389721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-10-15 13:07:29.389727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-10-15 13:07:29.389733] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.103 [2024-10-15 13:07:29.389747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-10-15 13:07:29.399630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-10-15 13:07:29.399699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-10-15 13:07:29.399713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-10-15 13:07:29.399721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-10-15 13:07:29.399727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.103 [2024-10-15 13:07:29.399743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-10-15 13:07:29.409657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-10-15 13:07:29.409756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-10-15 13:07:29.409769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-10-15 13:07:29.409776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-10-15 13:07:29.409782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.103 [2024-10-15 13:07:29.409796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-10-15 13:07:29.419632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-10-15 13:07:29.419691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-10-15 13:07:29.419705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-10-15 13:07:29.419711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-10-15 13:07:29.419717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.103 [2024-10-15 13:07:29.419731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.363 [2024-10-15 13:07:29.429720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.363 [2024-10-15 13:07:29.429775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.363 [2024-10-15 13:07:29.429789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.363 [2024-10-15 13:07:29.429795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.363 [2024-10-15 13:07:29.429805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.363 [2024-10-15 13:07:29.429820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.363 qpair failed and we were unable to recover it.
00:27:09.363 [2024-10-15 13:07:29.439694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.363 [2024-10-15 13:07:29.439749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.363 [2024-10-15 13:07:29.439763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.363 [2024-10-15 13:07:29.439770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.363 [2024-10-15 13:07:29.439776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.363 [2024-10-15 13:07:29.439790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.363 qpair failed and we were unable to recover it.
00:27:09.363 [2024-10-15 13:07:29.449694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.363 [2024-10-15 13:07:29.449749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.363 [2024-10-15 13:07:29.449762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.363 [2024-10-15 13:07:29.449769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.363 [2024-10-15 13:07:29.449774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.363 [2024-10-15 13:07:29.449788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.363 qpair failed and we were unable to recover it.
00:27:09.363 [2024-10-15 13:07:29.459815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.363 [2024-10-15 13:07:29.459871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.363 [2024-10-15 13:07:29.459885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.363 [2024-10-15 13:07:29.459891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.363 [2024-10-15 13:07:29.459897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.363 [2024-10-15 13:07:29.459911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.363 qpair failed and we were unable to recover it.
00:27:09.363 [2024-10-15 13:07:29.469760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.363 [2024-10-15 13:07:29.469818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.363 [2024-10-15 13:07:29.469830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.363 [2024-10-15 13:07:29.469837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.363 [2024-10-15 13:07:29.469843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.363 [2024-10-15 13:07:29.469858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.363 qpair failed and we were unable to recover it.
00:27:09.363 [2024-10-15 13:07:29.479857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.363 [2024-10-15 13:07:29.479913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.363 [2024-10-15 13:07:29.479926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.363 [2024-10-15 13:07:29.479932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.363 [2024-10-15 13:07:29.479938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.363 [2024-10-15 13:07:29.479953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.363 qpair failed and we were unable to recover it.
00:27:09.363 [2024-10-15 13:07:29.489926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.363 [2024-10-15 13:07:29.489988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.363 [2024-10-15 13:07:29.490001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.363 [2024-10-15 13:07:29.490007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.363 [2024-10-15 13:07:29.490013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.363 [2024-10-15 13:07:29.490027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.363 qpair failed and we were unable to recover it.
00:27:09.363 [2024-10-15 13:07:29.499913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.363 [2024-10-15 13:07:29.499970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.363 [2024-10-15 13:07:29.499983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.363 [2024-10-15 13:07:29.499990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.363 [2024-10-15 13:07:29.499996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.363 [2024-10-15 13:07:29.500009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.363 qpair failed and we were unable to recover it.
00:27:09.363 [2024-10-15 13:07:29.509942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.363 [2024-10-15 13:07:29.509999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.363 [2024-10-15 13:07:29.510012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.363 [2024-10-15 13:07:29.510019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.364 [2024-10-15 13:07:29.510025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.364 [2024-10-15 13:07:29.510039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.364 qpair failed and we were unable to recover it.
00:27:09.364 [2024-10-15 13:07:29.519950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.364 [2024-10-15 13:07:29.520001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.364 [2024-10-15 13:07:29.520014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.364 [2024-10-15 13:07:29.520024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.364 [2024-10-15 13:07:29.520030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.364 [2024-10-15 13:07:29.520044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.364 qpair failed and we were unable to recover it.
00:27:09.364 [2024-10-15 13:07:29.529917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.364 [2024-10-15 13:07:29.529973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.364 [2024-10-15 13:07:29.529986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.364 [2024-10-15 13:07:29.529992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.364 [2024-10-15 13:07:29.529998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.364 [2024-10-15 13:07:29.530012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.364 qpair failed and we were unable to recover it.
00:27:09.364 [2024-10-15 13:07:29.539957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.364 [2024-10-15 13:07:29.540014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.364 [2024-10-15 13:07:29.540027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.364 [2024-10-15 13:07:29.540034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.364 [2024-10-15 13:07:29.540040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.364 [2024-10-15 13:07:29.540054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.364 qpair failed and we were unable to recover it.
00:27:09.364 [2024-10-15 13:07:29.549977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.364 [2024-10-15 13:07:29.550034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.364 [2024-10-15 13:07:29.550046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.364 [2024-10-15 13:07:29.550052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.364 [2024-10-15 13:07:29.550058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.364 [2024-10-15 13:07:29.550073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.364 qpair failed and we were unable to recover it.
00:27:09.364 [2024-10-15 13:07:29.560075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.364 [2024-10-15 13:07:29.560128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.364 [2024-10-15 13:07:29.560141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.364 [2024-10-15 13:07:29.560147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.364 [2024-10-15 13:07:29.560153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.364 [2024-10-15 13:07:29.560167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.364 qpair failed and we were unable to recover it.
00:27:09.364 [2024-10-15 13:07:29.570081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.364 [2024-10-15 13:07:29.570130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.364 [2024-10-15 13:07:29.570143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.364 [2024-10-15 13:07:29.570149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.364 [2024-10-15 13:07:29.570154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.364 [2024-10-15 13:07:29.570169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.364 qpair failed and we were unable to recover it.
00:27:09.364 [2024-10-15 13:07:29.580110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.364 [2024-10-15 13:07:29.580188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.364 [2024-10-15 13:07:29.580202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.364 [2024-10-15 13:07:29.580208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.364 [2024-10-15 13:07:29.580214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.364 [2024-10-15 13:07:29.580229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.364 qpair failed and we were unable to recover it.
00:27:09.364 [2024-10-15 13:07:29.590146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.364 [2024-10-15 13:07:29.590229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.364 [2024-10-15 13:07:29.590242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.364 [2024-10-15 13:07:29.590248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.364 [2024-10-15 13:07:29.590253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.364 [2024-10-15 13:07:29.590268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.364 qpair failed and we were unable to recover it. 
00:27:09.364 [2024-10-15 13:07:29.600167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.364 [2024-10-15 13:07:29.600216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.364 [2024-10-15 13:07:29.600229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.364 [2024-10-15 13:07:29.600235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.364 [2024-10-15 13:07:29.600241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.364 [2024-10-15 13:07:29.600255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.364 qpair failed and we were unable to recover it. 
00:27:09.364 [2024-10-15 13:07:29.610140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.364 [2024-10-15 13:07:29.610192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.364 [2024-10-15 13:07:29.610205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.364 [2024-10-15 13:07:29.610215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.364 [2024-10-15 13:07:29.610221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.364 [2024-10-15 13:07:29.610236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.364 qpair failed and we were unable to recover it. 
00:27:09.364 [2024-10-15 13:07:29.620228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.364 [2024-10-15 13:07:29.620283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.364 [2024-10-15 13:07:29.620297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.364 [2024-10-15 13:07:29.620303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.364 [2024-10-15 13:07:29.620309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.364 [2024-10-15 13:07:29.620323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.364 qpair failed and we were unable to recover it. 
00:27:09.364 [2024-10-15 13:07:29.630275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.364 [2024-10-15 13:07:29.630333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.364 [2024-10-15 13:07:29.630346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.364 [2024-10-15 13:07:29.630353] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.364 [2024-10-15 13:07:29.630359] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.364 [2024-10-15 13:07:29.630374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.364 qpair failed and we were unable to recover it. 
00:27:09.364 [2024-10-15 13:07:29.640362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.364 [2024-10-15 13:07:29.640422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.364 [2024-10-15 13:07:29.640435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.364 [2024-10-15 13:07:29.640441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.364 [2024-10-15 13:07:29.640447] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.364 [2024-10-15 13:07:29.640461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.364 qpair failed and we were unable to recover it. 
00:27:09.364 [2024-10-15 13:07:29.650280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.364 [2024-10-15 13:07:29.650340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.364 [2024-10-15 13:07:29.650355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.364 [2024-10-15 13:07:29.650361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.364 [2024-10-15 13:07:29.650368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.364 [2024-10-15 13:07:29.650383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.364 qpair failed and we were unable to recover it. 
00:27:09.364 [2024-10-15 13:07:29.660411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.364 [2024-10-15 13:07:29.660466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.364 [2024-10-15 13:07:29.660481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.364 [2024-10-15 13:07:29.660487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.364 [2024-10-15 13:07:29.660493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.364 [2024-10-15 13:07:29.660508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.364 qpair failed and we were unable to recover it. 
00:27:09.364 [2024-10-15 13:07:29.670398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.364 [2024-10-15 13:07:29.670451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.364 [2024-10-15 13:07:29.670465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.364 [2024-10-15 13:07:29.670471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.364 [2024-10-15 13:07:29.670477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.364 [2024-10-15 13:07:29.670492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.364 qpair failed and we were unable to recover it. 
00:27:09.364 [2024-10-15 13:07:29.680408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.364 [2024-10-15 13:07:29.680462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.364 [2024-10-15 13:07:29.680475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.364 [2024-10-15 13:07:29.680482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.364 [2024-10-15 13:07:29.680488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.364 [2024-10-15 13:07:29.680502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.364 qpair failed and we were unable to recover it. 
00:27:09.624 [2024-10-15 13:07:29.690421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.624 [2024-10-15 13:07:29.690474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.624 [2024-10-15 13:07:29.690488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.624 [2024-10-15 13:07:29.690494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.624 [2024-10-15 13:07:29.690500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.624 [2024-10-15 13:07:29.690516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.624 qpair failed and we were unable to recover it. 
00:27:09.624 [2024-10-15 13:07:29.700525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.624 [2024-10-15 13:07:29.700583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.624 [2024-10-15 13:07:29.700605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.624 [2024-10-15 13:07:29.700612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.624 [2024-10-15 13:07:29.700617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.624 [2024-10-15 13:07:29.700633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.624 qpair failed and we were unable to recover it. 
00:27:09.624 [2024-10-15 13:07:29.710490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.624 [2024-10-15 13:07:29.710547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.624 [2024-10-15 13:07:29.710560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.624 [2024-10-15 13:07:29.710566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.624 [2024-10-15 13:07:29.710572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.624 [2024-10-15 13:07:29.710587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.624 qpair failed and we were unable to recover it. 
00:27:09.624 [2024-10-15 13:07:29.720508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.624 [2024-10-15 13:07:29.720563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.624 [2024-10-15 13:07:29.720577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.624 [2024-10-15 13:07:29.720584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.624 [2024-10-15 13:07:29.720590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.624 [2024-10-15 13:07:29.720608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.624 qpair failed and we were unable to recover it. 
00:27:09.624 [2024-10-15 13:07:29.730530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.730584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.730597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.730608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.730614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.730629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.740570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.740633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.740648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.740654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.740660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.740678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.750526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.750577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.750590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.750596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.750607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.750621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.760530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.760581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.760595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.760606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.760612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.760626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.770644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.770692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.770705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.770711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.770717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.770731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.780619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.780673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.780687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.780693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.780699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.780714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.790711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.790765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.790782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.790788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.790794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.790809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.800682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.800735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.800748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.800755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.800761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.800775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.810772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.810826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.810840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.810846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.810852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.810866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.820810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.820876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.820889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.820895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.820901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.820916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.830830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.830886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.830900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.830906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.830912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.830930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.840887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.840936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.840949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.840955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.840961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.840975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.850825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.850903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.850916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.850922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.850928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.850942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.860846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.625 [2024-10-15 13:07:29.860902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.625 [2024-10-15 13:07:29.860915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.625 [2024-10-15 13:07:29.860921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.625 [2024-10-15 13:07:29.860927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.625 [2024-10-15 13:07:29.860941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.625 qpair failed and we were unable to recover it. 
00:27:09.625 [2024-10-15 13:07:29.870913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.625 [2024-10-15 13:07:29.870964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.626 [2024-10-15 13:07:29.870977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.626 [2024-10-15 13:07:29.870983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.626 [2024-10-15 13:07:29.870989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.626 [2024-10-15 13:07:29.871004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.626 qpair failed and we were unable to recover it.
00:27:09.626 [2024-10-15 13:07:29.880960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.626 [2024-10-15 13:07:29.881013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.626 [2024-10-15 13:07:29.881031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.626 [2024-10-15 13:07:29.881037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.626 [2024-10-15 13:07:29.881042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.626 [2024-10-15 13:07:29.881057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.626 qpair failed and we were unable to recover it.
00:27:09.626 [2024-10-15 13:07:29.890978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.626 [2024-10-15 13:07:29.891029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.626 [2024-10-15 13:07:29.891042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.626 [2024-10-15 13:07:29.891048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.626 [2024-10-15 13:07:29.891054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.626 [2024-10-15 13:07:29.891068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.626 qpair failed and we were unable to recover it.
00:27:09.626 [2024-10-15 13:07:29.900990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.626 [2024-10-15 13:07:29.901045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.626 [2024-10-15 13:07:29.901060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.626 [2024-10-15 13:07:29.901067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.626 [2024-10-15 13:07:29.901074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.626 [2024-10-15 13:07:29.901090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.626 qpair failed and we were unable to recover it.
00:27:09.626 [2024-10-15 13:07:29.911050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.626 [2024-10-15 13:07:29.911105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.626 [2024-10-15 13:07:29.911120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.626 [2024-10-15 13:07:29.911126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.626 [2024-10-15 13:07:29.911132] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.626 [2024-10-15 13:07:29.911147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.626 qpair failed and we were unable to recover it.
00:27:09.626 [2024-10-15 13:07:29.921118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.626 [2024-10-15 13:07:29.921171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.626 [2024-10-15 13:07:29.921185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.626 [2024-10-15 13:07:29.921191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.626 [2024-10-15 13:07:29.921203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.626 [2024-10-15 13:07:29.921218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.626 qpair failed and we were unable to recover it.
00:27:09.626 [2024-10-15 13:07:29.931097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.626 [2024-10-15 13:07:29.931144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.626 [2024-10-15 13:07:29.931157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.626 [2024-10-15 13:07:29.931163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.626 [2024-10-15 13:07:29.931169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.626 [2024-10-15 13:07:29.931183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.626 qpair failed and we were unable to recover it.
00:27:09.626 [2024-10-15 13:07:29.941144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.626 [2024-10-15 13:07:29.941199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.626 [2024-10-15 13:07:29.941213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.626 [2024-10-15 13:07:29.941219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.626 [2024-10-15 13:07:29.941225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.626 [2024-10-15 13:07:29.941240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.626 qpair failed and we were unable to recover it.
00:27:09.928 [2024-10-15 13:07:29.951094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.928 [2024-10-15 13:07:29.951149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.928 [2024-10-15 13:07:29.951163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.928 [2024-10-15 13:07:29.951170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.928 [2024-10-15 13:07:29.951177] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.928 [2024-10-15 13:07:29.951192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.928 qpair failed and we were unable to recover it.
00:27:09.928 [2024-10-15 13:07:29.961234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.928 [2024-10-15 13:07:29.961296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.928 [2024-10-15 13:07:29.961309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.928 [2024-10-15 13:07:29.961315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.928 [2024-10-15 13:07:29.961321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.928 [2024-10-15 13:07:29.961336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.928 qpair failed and we were unable to recover it.
00:27:09.928 [2024-10-15 13:07:29.971221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.928 [2024-10-15 13:07:29.971274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.928 [2024-10-15 13:07:29.971288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.928 [2024-10-15 13:07:29.971294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.928 [2024-10-15 13:07:29.971300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.928 [2024-10-15 13:07:29.971314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.928 qpair failed and we were unable to recover it.
00:27:09.928 [2024-10-15 13:07:29.981249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.928 [2024-10-15 13:07:29.981306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.928 [2024-10-15 13:07:29.981321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.928 [2024-10-15 13:07:29.981328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.928 [2024-10-15 13:07:29.981334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.928 [2024-10-15 13:07:29.981349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.928 qpair failed and we were unable to recover it.
00:27:09.928 [2024-10-15 13:07:29.991284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.928 [2024-10-15 13:07:29.991345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.928 [2024-10-15 13:07:29.991359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.928 [2024-10-15 13:07:29.991366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.928 [2024-10-15 13:07:29.991372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.928 [2024-10-15 13:07:29.991386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.928 qpair failed and we were unable to recover it.
00:27:09.928 [2024-10-15 13:07:30.001303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.928 [2024-10-15 13:07:30.001352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.928 [2024-10-15 13:07:30.001365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.928 [2024-10-15 13:07:30.001372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.928 [2024-10-15 13:07:30.001377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.928 [2024-10-15 13:07:30.001392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.928 qpair failed and we were unable to recover it.
00:27:09.928 [2024-10-15 13:07:30.011371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.928 [2024-10-15 13:07:30.011426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.928 [2024-10-15 13:07:30.011440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.928 [2024-10-15 13:07:30.011446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.928 [2024-10-15 13:07:30.011455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.928 [2024-10-15 13:07:30.011470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.928 qpair failed and we were unable to recover it.
00:27:09.928 [2024-10-15 13:07:30.021348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.928 [2024-10-15 13:07:30.021406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.928 [2024-10-15 13:07:30.021422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.928 [2024-10-15 13:07:30.021429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.928 [2024-10-15 13:07:30.021435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.929 [2024-10-15 13:07:30.021452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.929 qpair failed and we were unable to recover it.
00:27:09.929 [2024-10-15 13:07:30.031333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.929 [2024-10-15 13:07:30.031391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.929 [2024-10-15 13:07:30.031412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.929 [2024-10-15 13:07:30.031419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.929 [2024-10-15 13:07:30.031424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.929 [2024-10-15 13:07:30.031439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.929 qpair failed and we were unable to recover it.
00:27:09.929 [2024-10-15 13:07:30.041450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.929 [2024-10-15 13:07:30.041514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.929 [2024-10-15 13:07:30.041528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.929 [2024-10-15 13:07:30.041534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.929 [2024-10-15 13:07:30.041540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.929 [2024-10-15 13:07:30.041554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.929 qpair failed and we were unable to recover it.
00:27:09.929 [2024-10-15 13:07:30.051457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.929 [2024-10-15 13:07:30.051513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.929 [2024-10-15 13:07:30.051527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.929 [2024-10-15 13:07:30.051533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.929 [2024-10-15 13:07:30.051539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.929 [2024-10-15 13:07:30.051554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.929 qpair failed and we were unable to recover it.
00:27:09.929 [2024-10-15 13:07:30.061495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.929 [2024-10-15 13:07:30.061547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.929 [2024-10-15 13:07:30.061562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.929 [2024-10-15 13:07:30.061568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.929 [2024-10-15 13:07:30.061574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.929 [2024-10-15 13:07:30.061589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.929 qpair failed and we were unable to recover it.
00:27:09.929 [2024-10-15 13:07:30.071458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.929 [2024-10-15 13:07:30.071539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.929 [2024-10-15 13:07:30.071553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.929 [2024-10-15 13:07:30.071560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.929 [2024-10-15 13:07:30.071566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.929 [2024-10-15 13:07:30.071580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.929 qpair failed and we were unable to recover it.
00:27:09.929 [2024-10-15 13:07:30.081522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.929 [2024-10-15 13:07:30.081575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.929 [2024-10-15 13:07:30.081589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.929 [2024-10-15 13:07:30.081595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.929 [2024-10-15 13:07:30.081604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.929 [2024-10-15 13:07:30.081619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.929 qpair failed and we were unable to recover it.
00:27:09.929 [2024-10-15 13:07:30.091555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.929 [2024-10-15 13:07:30.091617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.929 [2024-10-15 13:07:30.091631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.929 [2024-10-15 13:07:30.091638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.929 [2024-10-15 13:07:30.091644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.929 [2024-10-15 13:07:30.091659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.929 qpair failed and we were unable to recover it.
00:27:09.929 [2024-10-15 13:07:30.101644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.929 [2024-10-15 13:07:30.101713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.929 [2024-10-15 13:07:30.101726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.929 [2024-10-15 13:07:30.101736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.929 [2024-10-15 13:07:30.101742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.929 [2024-10-15 13:07:30.101757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.929 qpair failed and we were unable to recover it.
00:27:09.929 [2024-10-15 13:07:30.111631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.929 [2024-10-15 13:07:30.111681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.929 [2024-10-15 13:07:30.111694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.929 [2024-10-15 13:07:30.111701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.929 [2024-10-15 13:07:30.111707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.929 [2024-10-15 13:07:30.111722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.929 qpair failed and we were unable to recover it.
00:27:09.929 [2024-10-15 13:07:30.121660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.929 [2024-10-15 13:07:30.121717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.929 [2024-10-15 13:07:30.121730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.929 [2024-10-15 13:07:30.121737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.929 [2024-10-15 13:07:30.121743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.929 [2024-10-15 13:07:30.121757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.929 qpair failed and we were unable to recover it.
00:27:09.929 [2024-10-15 13:07:30.131663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.929 [2024-10-15 13:07:30.131718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.929 [2024-10-15 13:07:30.131732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.929 [2024-10-15 13:07:30.131739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.929 [2024-10-15 13:07:30.131744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.929 [2024-10-15 13:07:30.131758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.929 qpair failed and we were unable to recover it.
00:27:09.930 [2024-10-15 13:07:30.141719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.930 [2024-10-15 13:07:30.141813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.930 [2024-10-15 13:07:30.141826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.930 [2024-10-15 13:07:30.141833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.930 [2024-10-15 13:07:30.141839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.930 [2024-10-15 13:07:30.141854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.930 qpair failed and we were unable to recover it.
00:27:09.930 [2024-10-15 13:07:30.151726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.930 [2024-10-15 13:07:30.151784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.930 [2024-10-15 13:07:30.151799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.930 [2024-10-15 13:07:30.151807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.930 [2024-10-15 13:07:30.151812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.930 [2024-10-15 13:07:30.151828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.930 qpair failed and we were unable to recover it.
00:27:09.930 [2024-10-15 13:07:30.161699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.930 [2024-10-15 13:07:30.161748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.930 [2024-10-15 13:07:30.161762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.930 [2024-10-15 13:07:30.161769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.930 [2024-10-15 13:07:30.161775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.930 [2024-10-15 13:07:30.161790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.930 qpair failed and we were unable to recover it.
00:27:09.930 [2024-10-15 13:07:30.171752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.930 [2024-10-15 13:07:30.171816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.930 [2024-10-15 13:07:30.171830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.930 [2024-10-15 13:07:30.171836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.930 [2024-10-15 13:07:30.171842] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.930 [2024-10-15 13:07:30.171856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.930 qpair failed and we were unable to recover it.
00:27:09.930 [2024-10-15 13:07:30.181832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.930 [2024-10-15 13:07:30.181905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.930 [2024-10-15 13:07:30.181919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.930 [2024-10-15 13:07:30.181925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.930 [2024-10-15 13:07:30.181931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.930 [2024-10-15 13:07:30.181946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.930 qpair failed and we were unable to recover it.
00:27:09.930 [2024-10-15 13:07:30.191842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.930 [2024-10-15 13:07:30.191900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.930 [2024-10-15 13:07:30.191914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.930 [2024-10-15 13:07:30.191924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.930 [2024-10-15 13:07:30.191929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.930 [2024-10-15 13:07:30.191944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.930 qpair failed and we were unable to recover it.
00:27:09.930 [2024-10-15 13:07:30.201897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.930 [2024-10-15 13:07:30.201948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.930 [2024-10-15 13:07:30.201961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.930 [2024-10-15 13:07:30.201967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.930 [2024-10-15 13:07:30.201973] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.930 [2024-10-15 13:07:30.201987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.930 qpair failed and we were unable to recover it.
00:27:09.930 [2024-10-15 13:07:30.211901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.930 [2024-10-15 13:07:30.211959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.930 [2024-10-15 13:07:30.211973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.930 [2024-10-15 13:07:30.211979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.930 [2024-10-15 13:07:30.211985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.930 [2024-10-15 13:07:30.211999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.930 qpair failed and we were unable to recover it.
00:27:09.930 [2024-10-15 13:07:30.221956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.930 [2024-10-15 13:07:30.222009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.930 [2024-10-15 13:07:30.222023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.930 [2024-10-15 13:07:30.222029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.930 [2024-10-15 13:07:30.222036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:09.930 [2024-10-15 13:07:30.222050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.930 qpair failed and we were unable to recover it.
00:27:09.930 [2024-10-15 13:07:30.231974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.930 [2024-10-15 13:07:30.232031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.930 [2024-10-15 13:07:30.232044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.930 [2024-10-15 13:07:30.232050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.930 [2024-10-15 13:07:30.232056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.930 [2024-10-15 13:07:30.232070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.930 qpair failed and we were unable to recover it. 
00:27:09.930 [2024-10-15 13:07:30.241983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.930 [2024-10-15 13:07:30.242050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.930 [2024-10-15 13:07:30.242063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.930 [2024-10-15 13:07:30.242069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.931 [2024-10-15 13:07:30.242075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:09.931 [2024-10-15 13:07:30.242090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.931 qpair failed and we were unable to recover it. 
00:27:10.192 [2024-10-15 13:07:30.252019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.192 [2024-10-15 13:07:30.252087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.192 [2024-10-15 13:07:30.252100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.192 [2024-10-15 13:07:30.252107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.192 [2024-10-15 13:07:30.252113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.192 [2024-10-15 13:07:30.252127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.192 qpair failed and we were unable to recover it. 
00:27:10.192 [2024-10-15 13:07:30.262059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.192 [2024-10-15 13:07:30.262114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.192 [2024-10-15 13:07:30.262127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.192 [2024-10-15 13:07:30.262133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.192 [2024-10-15 13:07:30.262139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.192 [2024-10-15 13:07:30.262154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.192 qpair failed and we were unable to recover it. 
00:27:10.192 [2024-10-15 13:07:30.272075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.192 [2024-10-15 13:07:30.272153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.192 [2024-10-15 13:07:30.272166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.192 [2024-10-15 13:07:30.272173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.192 [2024-10-15 13:07:30.272179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.192 [2024-10-15 13:07:30.272193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.192 qpair failed and we were unable to recover it. 
00:27:10.192 [2024-10-15 13:07:30.282069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.192 [2024-10-15 13:07:30.282150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.192 [2024-10-15 13:07:30.282168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.192 [2024-10-15 13:07:30.282174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.192 [2024-10-15 13:07:30.282180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.192 [2024-10-15 13:07:30.282194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.192 qpair failed and we were unable to recover it. 
00:27:10.192 [2024-10-15 13:07:30.292123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.192 [2024-10-15 13:07:30.292177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.192 [2024-10-15 13:07:30.292190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.193 [2024-10-15 13:07:30.292196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.193 [2024-10-15 13:07:30.292201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.193 [2024-10-15 13:07:30.292216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.193 qpair failed and we were unable to recover it. 
00:27:10.193 [2024-10-15 13:07:30.302179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.193 [2024-10-15 13:07:30.302242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.193 [2024-10-15 13:07:30.302256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.193 [2024-10-15 13:07:30.302262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.193 [2024-10-15 13:07:30.302268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.193 [2024-10-15 13:07:30.302282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.193 qpair failed and we were unable to recover it. 
00:27:10.193 [2024-10-15 13:07:30.312212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.193 [2024-10-15 13:07:30.312274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.193 [2024-10-15 13:07:30.312287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.193 [2024-10-15 13:07:30.312293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.193 [2024-10-15 13:07:30.312299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.193 [2024-10-15 13:07:30.312313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.193 qpair failed and we were unable to recover it. 
00:27:10.193 [2024-10-15 13:07:30.322260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.193 [2024-10-15 13:07:30.322328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.193 [2024-10-15 13:07:30.322349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.193 [2024-10-15 13:07:30.322356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.193 [2024-10-15 13:07:30.322362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.193 [2024-10-15 13:07:30.322384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.193 qpair failed and we were unable to recover it. 
00:27:10.193 [2024-10-15 13:07:30.332232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.193 [2024-10-15 13:07:30.332287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.193 [2024-10-15 13:07:30.332301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.193 [2024-10-15 13:07:30.332308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.193 [2024-10-15 13:07:30.332314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.193 [2024-10-15 13:07:30.332329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.193 qpair failed and we were unable to recover it. 
00:27:10.193 [2024-10-15 13:07:30.342319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.193 [2024-10-15 13:07:30.342379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.193 [2024-10-15 13:07:30.342392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.193 [2024-10-15 13:07:30.342399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.193 [2024-10-15 13:07:30.342405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.193 [2024-10-15 13:07:30.342419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.193 qpair failed and we were unable to recover it. 
00:27:10.193 [2024-10-15 13:07:30.352312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.193 [2024-10-15 13:07:30.352369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.193 [2024-10-15 13:07:30.352382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.193 [2024-10-15 13:07:30.352389] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.193 [2024-10-15 13:07:30.352394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.193 [2024-10-15 13:07:30.352409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.193 qpair failed and we were unable to recover it. 
00:27:10.193 [2024-10-15 13:07:30.362320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.193 [2024-10-15 13:07:30.362373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.193 [2024-10-15 13:07:30.362386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.193 [2024-10-15 13:07:30.362392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.193 [2024-10-15 13:07:30.362398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.193 [2024-10-15 13:07:30.362413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.193 qpair failed and we were unable to recover it. 
00:27:10.193 [2024-10-15 13:07:30.372347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.193 [2024-10-15 13:07:30.372401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.193 [2024-10-15 13:07:30.372418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.193 [2024-10-15 13:07:30.372425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.193 [2024-10-15 13:07:30.372431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.193 [2024-10-15 13:07:30.372445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.193 qpair failed and we were unable to recover it. 
00:27:10.193 [2024-10-15 13:07:30.382412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.193 [2024-10-15 13:07:30.382467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.193 [2024-10-15 13:07:30.382481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.193 [2024-10-15 13:07:30.382487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.193 [2024-10-15 13:07:30.382493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.193 [2024-10-15 13:07:30.382508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.193 qpair failed and we were unable to recover it. 
00:27:10.193 [2024-10-15 13:07:30.392431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.194 [2024-10-15 13:07:30.392487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.194 [2024-10-15 13:07:30.392501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.194 [2024-10-15 13:07:30.392509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.194 [2024-10-15 13:07:30.392514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.194 [2024-10-15 13:07:30.392529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.194 qpair failed and we were unable to recover it. 
00:27:10.194 [2024-10-15 13:07:30.402434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.194 [2024-10-15 13:07:30.402522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.194 [2024-10-15 13:07:30.402537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.194 [2024-10-15 13:07:30.402544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.194 [2024-10-15 13:07:30.402551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.194 [2024-10-15 13:07:30.402566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.194 qpair failed and we were unable to recover it. 
00:27:10.194 [2024-10-15 13:07:30.412469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.194 [2024-10-15 13:07:30.412522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.194 [2024-10-15 13:07:30.412537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.194 [2024-10-15 13:07:30.412544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.194 [2024-10-15 13:07:30.412554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.194 [2024-10-15 13:07:30.412569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.194 qpair failed and we were unable to recover it. 
00:27:10.194 [2024-10-15 13:07:30.422519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.194 [2024-10-15 13:07:30.422575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.194 [2024-10-15 13:07:30.422589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.194 [2024-10-15 13:07:30.422596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.194 [2024-10-15 13:07:30.422606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.194 [2024-10-15 13:07:30.422622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.194 qpair failed and we were unable to recover it. 
00:27:10.194 [2024-10-15 13:07:30.432561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.194 [2024-10-15 13:07:30.432619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.194 [2024-10-15 13:07:30.432632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.194 [2024-10-15 13:07:30.432639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.194 [2024-10-15 13:07:30.432645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.194 [2024-10-15 13:07:30.432659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.194 qpair failed and we were unable to recover it. 
00:27:10.194 [2024-10-15 13:07:30.442476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.194 [2024-10-15 13:07:30.442526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.194 [2024-10-15 13:07:30.442539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.194 [2024-10-15 13:07:30.442546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.194 [2024-10-15 13:07:30.442551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.194 [2024-10-15 13:07:30.442566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.194 qpair failed and we were unable to recover it. 
00:27:10.194 [2024-10-15 13:07:30.452592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.194 [2024-10-15 13:07:30.452665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.194 [2024-10-15 13:07:30.452679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.194 [2024-10-15 13:07:30.452685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.194 [2024-10-15 13:07:30.452691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.194 [2024-10-15 13:07:30.452706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.194 qpair failed and we were unable to recover it. 
00:27:10.194 [2024-10-15 13:07:30.462620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.194 [2024-10-15 13:07:30.462686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.194 [2024-10-15 13:07:30.462699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.194 [2024-10-15 13:07:30.462706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.194 [2024-10-15 13:07:30.462712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.194 [2024-10-15 13:07:30.462726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.194 qpair failed and we were unable to recover it. 
00:27:10.194 [2024-10-15 13:07:30.472639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.194 [2024-10-15 13:07:30.472695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.194 [2024-10-15 13:07:30.472708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.194 [2024-10-15 13:07:30.472715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.194 [2024-10-15 13:07:30.472721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.194 [2024-10-15 13:07:30.472735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.194 qpair failed and we were unable to recover it. 
00:27:10.194 [2024-10-15 13:07:30.482676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.194 [2024-10-15 13:07:30.482732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.194 [2024-10-15 13:07:30.482745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.194 [2024-10-15 13:07:30.482752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.194 [2024-10-15 13:07:30.482758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.194 [2024-10-15 13:07:30.482772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.194 qpair failed and we were unable to recover it. 
00:27:10.195 [2024-10-15 13:07:30.492681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.195 [2024-10-15 13:07:30.492737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.195 [2024-10-15 13:07:30.492750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.195 [2024-10-15 13:07:30.492756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.195 [2024-10-15 13:07:30.492762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.195 [2024-10-15 13:07:30.492776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.195 qpair failed and we were unable to recover it. 
00:27:10.195 [2024-10-15 13:07:30.502739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.195 [2024-10-15 13:07:30.502792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.195 [2024-10-15 13:07:30.502806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.195 [2024-10-15 13:07:30.502812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.195 [2024-10-15 13:07:30.502821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.195 [2024-10-15 13:07:30.502836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.195 qpair failed and we were unable to recover it. 
00:27:10.455 [2024-10-15 13:07:30.512755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.455 [2024-10-15 13:07:30.512814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.455 [2024-10-15 13:07:30.512828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.455 [2024-10-15 13:07:30.512835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.455 [2024-10-15 13:07:30.512841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.455 [2024-10-15 13:07:30.512856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.455 qpair failed and we were unable to recover it.
00:27:10.455 [2024-10-15 13:07:30.522803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.455 [2024-10-15 13:07:30.522855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.455 [2024-10-15 13:07:30.522868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.455 [2024-10-15 13:07:30.522874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.455 [2024-10-15 13:07:30.522881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.455 [2024-10-15 13:07:30.522895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.455 qpair failed and we were unable to recover it.
00:27:10.455 [2024-10-15 13:07:30.532813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.455 [2024-10-15 13:07:30.532869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.455 [2024-10-15 13:07:30.532882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.455 [2024-10-15 13:07:30.532888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.455 [2024-10-15 13:07:30.532894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.455 [2024-10-15 13:07:30.532907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.455 qpair failed and we were unable to recover it.
00:27:10.455 [2024-10-15 13:07:30.542862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.455 [2024-10-15 13:07:30.542932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.455 [2024-10-15 13:07:30.542944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.455 [2024-10-15 13:07:30.542951] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.455 [2024-10-15 13:07:30.542957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.455 [2024-10-15 13:07:30.542972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.455 qpair failed and we were unable to recover it.
00:27:10.455 [2024-10-15 13:07:30.552878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.455 [2024-10-15 13:07:30.552928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.455 [2024-10-15 13:07:30.552941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.455 [2024-10-15 13:07:30.552947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.455 [2024-10-15 13:07:30.552953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.455 [2024-10-15 13:07:30.552968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.455 qpair failed and we were unable to recover it.
00:27:10.455 [2024-10-15 13:07:30.562889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.455 [2024-10-15 13:07:30.562986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.455 [2024-10-15 13:07:30.562999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.455 [2024-10-15 13:07:30.563006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.455 [2024-10-15 13:07:30.563012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.455 [2024-10-15 13:07:30.563026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.455 qpair failed and we were unable to recover it.
00:27:10.455 [2024-10-15 13:07:30.572962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.455 [2024-10-15 13:07:30.573020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.455 [2024-10-15 13:07:30.573033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.455 [2024-10-15 13:07:30.573040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.455 [2024-10-15 13:07:30.573046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.455 [2024-10-15 13:07:30.573060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.455 qpair failed and we were unable to recover it.
00:27:10.455 [2024-10-15 13:07:30.582992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.455 [2024-10-15 13:07:30.583049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.455 [2024-10-15 13:07:30.583062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.455 [2024-10-15 13:07:30.583068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.583074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.583089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.592981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.593033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.593047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.593056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.593062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.593076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.603007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.603093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.603106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.603113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.603119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.603133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.613025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.613073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.613087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.613093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.613099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.613113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.623076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.623131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.623144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.623150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.623156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.623170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.633074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.633132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.633145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.633152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.633157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.633172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.643053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.643109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.643122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.643129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.643135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.643149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.653165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.653225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.653239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.653246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.653252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.653266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.663162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.663218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.663232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.663238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.663244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.663259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.673241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.673300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.673314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.673321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.673327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.673341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.683219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.683275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.683289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.683308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.683314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.683328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.693308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.693378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.693392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.693398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.693404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.693418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.703338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.703393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.703407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.703413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.703419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.703434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.713337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.713440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.713454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.713460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.456 [2024-10-15 13:07:30.713466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.456 [2024-10-15 13:07:30.713481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.456 qpair failed and we were unable to recover it.
00:27:10.456 [2024-10-15 13:07:30.723331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.456 [2024-10-15 13:07:30.723384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.456 [2024-10-15 13:07:30.723398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.456 [2024-10-15 13:07:30.723404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.457 [2024-10-15 13:07:30.723410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.457 [2024-10-15 13:07:30.723425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.457 qpair failed and we were unable to recover it.
00:27:10.457 [2024-10-15 13:07:30.733382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.457 [2024-10-15 13:07:30.733431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.457 [2024-10-15 13:07:30.733444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.457 [2024-10-15 13:07:30.733450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.457 [2024-10-15 13:07:30.733456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.457 [2024-10-15 13:07:30.733471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.457 qpair failed and we were unable to recover it.
00:27:10.457 [2024-10-15 13:07:30.743416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.457 [2024-10-15 13:07:30.743470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.457 [2024-10-15 13:07:30.743483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.457 [2024-10-15 13:07:30.743490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.457 [2024-10-15 13:07:30.743496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.457 [2024-10-15 13:07:30.743510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.457 qpair failed and we were unable to recover it.
00:27:10.457 [2024-10-15 13:07:30.753438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.457 [2024-10-15 13:07:30.753494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.457 [2024-10-15 13:07:30.753508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.457 [2024-10-15 13:07:30.753514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.457 [2024-10-15 13:07:30.753520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.457 [2024-10-15 13:07:30.753534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.457 qpair failed and we were unable to recover it.
00:27:10.457 [2024-10-15 13:07:30.763491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.457 [2024-10-15 13:07:30.763546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.457 [2024-10-15 13:07:30.763559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.457 [2024-10-15 13:07:30.763565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.457 [2024-10-15 13:07:30.763571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.457 [2024-10-15 13:07:30.763586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.457 qpair failed and we were unable to recover it.
00:27:10.457 [2024-10-15 13:07:30.773478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.457 [2024-10-15 13:07:30.773526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.457 [2024-10-15 13:07:30.773543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.457 [2024-10-15 13:07:30.773549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.457 [2024-10-15 13:07:30.773555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.457 [2024-10-15 13:07:30.773570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.457 qpair failed and we were unable to recover it.
00:27:10.717 [2024-10-15 13:07:30.783512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.717 [2024-10-15 13:07:30.783605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.717 [2024-10-15 13:07:30.783619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.717 [2024-10-15 13:07:30.783626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.717 [2024-10-15 13:07:30.783632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.717 [2024-10-15 13:07:30.783647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.717 qpair failed and we were unable to recover it.
00:27:10.717 [2024-10-15 13:07:30.793532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.717 [2024-10-15 13:07:30.793589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.717 [2024-10-15 13:07:30.793606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.717 [2024-10-15 13:07:30.793613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.717 [2024-10-15 13:07:30.793619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.717 [2024-10-15 13:07:30.793634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.717 qpair failed and we were unable to recover it.
00:27:10.717 [2024-10-15 13:07:30.803533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.717 [2024-10-15 13:07:30.803584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.717 [2024-10-15 13:07:30.803598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.717 [2024-10-15 13:07:30.803608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.717 [2024-10-15 13:07:30.803614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.717 [2024-10-15 13:07:30.803629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.717 qpair failed and we were unable to recover it.
00:27:10.717 [2024-10-15 13:07:30.813594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.717 [2024-10-15 13:07:30.813649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.717 [2024-10-15 13:07:30.813662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.717 [2024-10-15 13:07:30.813669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.717 [2024-10-15 13:07:30.813675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.717 [2024-10-15 13:07:30.813692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.717 qpair failed and we were unable to recover it.
00:27:10.717 [2024-10-15 13:07:30.823643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.717 [2024-10-15 13:07:30.823695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.717 [2024-10-15 13:07:30.823708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.717 [2024-10-15 13:07:30.823715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.717 [2024-10-15 13:07:30.823721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.717 [2024-10-15 13:07:30.823735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.717 qpair failed and we were unable to recover it.
00:27:10.717 [2024-10-15 13:07:30.833642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.717 [2024-10-15 13:07:30.833696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.717 [2024-10-15 13:07:30.833709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.717 [2024-10-15 13:07:30.833715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.717 [2024-10-15 13:07:30.833721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.717 [2024-10-15 13:07:30.833736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.717 qpair failed and we were unable to recover it.
00:27:10.717 [2024-10-15 13:07:30.843690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.717 [2024-10-15 13:07:30.843748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.717 [2024-10-15 13:07:30.843761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.718 [2024-10-15 13:07:30.843768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.718 [2024-10-15 13:07:30.843774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.718 [2024-10-15 13:07:30.843788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.718 qpair failed and we were unable to recover it.
00:27:10.718 [2024-10-15 13:07:30.853696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.718 [2024-10-15 13:07:30.853753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.718 [2024-10-15 13:07:30.853769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.718 [2024-10-15 13:07:30.853776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.718 [2024-10-15 13:07:30.853782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.718 [2024-10-15 13:07:30.853796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.718 qpair failed and we were unable to recover it.
00:27:10.718 [2024-10-15 13:07:30.863733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.718 [2024-10-15 13:07:30.863787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.718 [2024-10-15 13:07:30.863804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.718 [2024-10-15 13:07:30.863811] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.718 [2024-10-15 13:07:30.863816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.718 [2024-10-15 13:07:30.863831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.718 qpair failed and we were unable to recover it.
00:27:10.718 [2024-10-15 13:07:30.873693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.718 [2024-10-15 13:07:30.873748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.718 [2024-10-15 13:07:30.873762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.718 [2024-10-15 13:07:30.873768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.718 [2024-10-15 13:07:30.873774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.718 [2024-10-15 13:07:30.873788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.718 qpair failed and we were unable to recover it. 
00:27:10.718 [2024-10-15 13:07:30.883811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.718 [2024-10-15 13:07:30.883869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.718 [2024-10-15 13:07:30.883882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.718 [2024-10-15 13:07:30.883889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.718 [2024-10-15 13:07:30.883895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.718 [2024-10-15 13:07:30.883909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.718 qpair failed and we were unable to recover it. 
00:27:10.718 [2024-10-15 13:07:30.893754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.718 [2024-10-15 13:07:30.893805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.718 [2024-10-15 13:07:30.893819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.718 [2024-10-15 13:07:30.893825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.718 [2024-10-15 13:07:30.893831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.718 [2024-10-15 13:07:30.893846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.718 qpair failed and we were unable to recover it. 
00:27:10.718 [2024-10-15 13:07:30.903788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.718 [2024-10-15 13:07:30.903861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.718 [2024-10-15 13:07:30.903874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.718 [2024-10-15 13:07:30.903881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.718 [2024-10-15 13:07:30.903886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.718 [2024-10-15 13:07:30.903904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.718 qpair failed and we were unable to recover it. 
00:27:10.718 [2024-10-15 13:07:30.913851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.718 [2024-10-15 13:07:30.913957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.718 [2024-10-15 13:07:30.913971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.718 [2024-10-15 13:07:30.913977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.718 [2024-10-15 13:07:30.913983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.718 [2024-10-15 13:07:30.913999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.718 qpair failed and we were unable to recover it. 
00:27:10.718 [2024-10-15 13:07:30.923833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.718 [2024-10-15 13:07:30.923884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.718 [2024-10-15 13:07:30.923897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.718 [2024-10-15 13:07:30.923903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.718 [2024-10-15 13:07:30.923909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.718 [2024-10-15 13:07:30.923924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.718 qpair failed and we were unable to recover it. 
00:27:10.718 [2024-10-15 13:07:30.933901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.718 [2024-10-15 13:07:30.933968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.718 [2024-10-15 13:07:30.933982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.718 [2024-10-15 13:07:30.933989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.718 [2024-10-15 13:07:30.933995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.718 [2024-10-15 13:07:30.934010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.718 qpair failed and we were unable to recover it. 
00:27:10.718 [2024-10-15 13:07:30.943973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.718 [2024-10-15 13:07:30.944063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.718 [2024-10-15 13:07:30.944077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.718 [2024-10-15 13:07:30.944083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.718 [2024-10-15 13:07:30.944089] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.718 [2024-10-15 13:07:30.944104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.718 qpair failed and we were unable to recover it. 
00:27:10.718 [2024-10-15 13:07:30.954012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.718 [2024-10-15 13:07:30.954074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.718 [2024-10-15 13:07:30.954087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.718 [2024-10-15 13:07:30.954093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.718 [2024-10-15 13:07:30.954099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.718 [2024-10-15 13:07:30.954113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.718 qpair failed and we were unable to recover it. 
00:27:10.718 [2024-10-15 13:07:30.964000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.718 [2024-10-15 13:07:30.964053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.718 [2024-10-15 13:07:30.964066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.718 [2024-10-15 13:07:30.964072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.718 [2024-10-15 13:07:30.964078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.718 [2024-10-15 13:07:30.964092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.718 qpair failed and we were unable to recover it. 
00:27:10.718 [2024-10-15 13:07:30.974057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.718 [2024-10-15 13:07:30.974115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.718 [2024-10-15 13:07:30.974136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.718 [2024-10-15 13:07:30.974142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.718 [2024-10-15 13:07:30.974148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.718 [2024-10-15 13:07:30.974163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.718 qpair failed and we were unable to recover it. 
00:27:10.718 [2024-10-15 13:07:30.984096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.718 [2024-10-15 13:07:30.984149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.718 [2024-10-15 13:07:30.984162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.718 [2024-10-15 13:07:30.984169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.718 [2024-10-15 13:07:30.984174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.719 [2024-10-15 13:07:30.984189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.719 qpair failed and we were unable to recover it. 
00:27:10.719 [2024-10-15 13:07:30.994039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.719 [2024-10-15 13:07:30.994098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.719 [2024-10-15 13:07:30.994112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.719 [2024-10-15 13:07:30.994118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.719 [2024-10-15 13:07:30.994128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.719 [2024-10-15 13:07:30.994143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.719 qpair failed and we were unable to recover it. 
00:27:10.719 [2024-10-15 13:07:31.004103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.719 [2024-10-15 13:07:31.004197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.719 [2024-10-15 13:07:31.004211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.719 [2024-10-15 13:07:31.004217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.719 [2024-10-15 13:07:31.004223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.719 [2024-10-15 13:07:31.004237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.719 qpair failed and we were unable to recover it. 
00:27:10.719 [2024-10-15 13:07:31.014150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.719 [2024-10-15 13:07:31.014203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.719 [2024-10-15 13:07:31.014216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.719 [2024-10-15 13:07:31.014222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.719 [2024-10-15 13:07:31.014228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.719 [2024-10-15 13:07:31.014242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.719 qpair failed and we were unable to recover it. 
00:27:10.719 [2024-10-15 13:07:31.024238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.719 [2024-10-15 13:07:31.024340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.719 [2024-10-15 13:07:31.024353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.719 [2024-10-15 13:07:31.024359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.719 [2024-10-15 13:07:31.024365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.719 [2024-10-15 13:07:31.024379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.719 qpair failed and we were unable to recover it. 
00:27:10.719 [2024-10-15 13:07:31.034149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.719 [2024-10-15 13:07:31.034202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.719 [2024-10-15 13:07:31.034216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.719 [2024-10-15 13:07:31.034222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.719 [2024-10-15 13:07:31.034228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.719 [2024-10-15 13:07:31.034242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.719 qpair failed and we were unable to recover it. 
00:27:10.979 [2024-10-15 13:07:31.044180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.979 [2024-10-15 13:07:31.044237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.979 [2024-10-15 13:07:31.044251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.979 [2024-10-15 13:07:31.044257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.979 [2024-10-15 13:07:31.044264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.979 [2024-10-15 13:07:31.044278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.979 qpair failed and we were unable to recover it. 
00:27:10.979 [2024-10-15 13:07:31.054198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.979 [2024-10-15 13:07:31.054248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.979 [2024-10-15 13:07:31.054262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.979 [2024-10-15 13:07:31.054268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.979 [2024-10-15 13:07:31.054274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.979 [2024-10-15 13:07:31.054288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.979 qpair failed and we were unable to recover it. 
00:27:10.979 [2024-10-15 13:07:31.064311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.979 [2024-10-15 13:07:31.064405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.979 [2024-10-15 13:07:31.064418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.979 [2024-10-15 13:07:31.064425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.979 [2024-10-15 13:07:31.064430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.979 [2024-10-15 13:07:31.064446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.979 qpair failed and we were unable to recover it. 
00:27:10.979 [2024-10-15 13:07:31.074359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.979 [2024-10-15 13:07:31.074415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.979 [2024-10-15 13:07:31.074428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.979 [2024-10-15 13:07:31.074435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.979 [2024-10-15 13:07:31.074441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.979 [2024-10-15 13:07:31.074456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.979 qpair failed and we were unable to recover it. 
00:27:10.979 [2024-10-15 13:07:31.084281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.979 [2024-10-15 13:07:31.084335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.979 [2024-10-15 13:07:31.084349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.979 [2024-10-15 13:07:31.084359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.979 [2024-10-15 13:07:31.084366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.979 [2024-10-15 13:07:31.084380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.979 qpair failed and we were unable to recover it. 
00:27:10.979 [2024-10-15 13:07:31.094375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.979 [2024-10-15 13:07:31.094465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.979 [2024-10-15 13:07:31.094479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.979 [2024-10-15 13:07:31.094486] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.979 [2024-10-15 13:07:31.094491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.979 [2024-10-15 13:07:31.094506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.979 qpair failed and we were unable to recover it. 
00:27:10.980 [2024-10-15 13:07:31.104412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.980 [2024-10-15 13:07:31.104469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.980 [2024-10-15 13:07:31.104482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.980 [2024-10-15 13:07:31.104489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.980 [2024-10-15 13:07:31.104495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.980 [2024-10-15 13:07:31.104509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.980 qpair failed and we were unable to recover it. 
00:27:10.980 [2024-10-15 13:07:31.114364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.980 [2024-10-15 13:07:31.114460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.980 [2024-10-15 13:07:31.114473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.980 [2024-10-15 13:07:31.114479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.980 [2024-10-15 13:07:31.114485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.980 [2024-10-15 13:07:31.114499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.980 qpair failed and we were unable to recover it. 
00:27:10.980 [2024-10-15 13:07:31.124457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.980 [2024-10-15 13:07:31.124513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.980 [2024-10-15 13:07:31.124527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.980 [2024-10-15 13:07:31.124533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.980 [2024-10-15 13:07:31.124539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.980 [2024-10-15 13:07:31.124554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.980 qpair failed and we were unable to recover it. 
00:27:10.980 [2024-10-15 13:07:31.134485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.980 [2024-10-15 13:07:31.134541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.980 [2024-10-15 13:07:31.134554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.980 [2024-10-15 13:07:31.134560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.980 [2024-10-15 13:07:31.134566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.980 [2024-10-15 13:07:31.134580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.980 qpair failed and we were unable to recover it. 
00:27:10.980 [2024-10-15 13:07:31.144505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.980 [2024-10-15 13:07:31.144712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.980 [2024-10-15 13:07:31.144728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.980 [2024-10-15 13:07:31.144734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.980 [2024-10-15 13:07:31.144741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:10.980 [2024-10-15 13:07:31.144756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.980 qpair failed and we were unable to recover it. 
00:27:10.980 [2024-10-15 13:07:31.154546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.980 [2024-10-15 13:07:31.154611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.980 [2024-10-15 13:07:31.154627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.980 [2024-10-15 13:07:31.154633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.980 [2024-10-15 13:07:31.154640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.980 [2024-10-15 13:07:31.154654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.980 qpair failed and we were unable to recover it.
00:27:10.980 [2024-10-15 13:07:31.164615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.980 [2024-10-15 13:07:31.164671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.980 [2024-10-15 13:07:31.164685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.980 [2024-10-15 13:07:31.164691] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.980 [2024-10-15 13:07:31.164697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.980 [2024-10-15 13:07:31.164713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.980 qpair failed and we were unable to recover it.
00:27:10.980 [2024-10-15 13:07:31.174624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.980 [2024-10-15 13:07:31.174681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.980 [2024-10-15 13:07:31.174695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.980 [2024-10-15 13:07:31.174704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.980 [2024-10-15 13:07:31.174710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.980 [2024-10-15 13:07:31.174724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.980 qpair failed and we were unable to recover it.
00:27:10.980 [2024-10-15 13:07:31.184619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.980 [2024-10-15 13:07:31.184691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.980 [2024-10-15 13:07:31.184706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.980 [2024-10-15 13:07:31.184713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.980 [2024-10-15 13:07:31.184718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.980 [2024-10-15 13:07:31.184733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.980 qpair failed and we were unable to recover it.
00:27:10.980 [2024-10-15 13:07:31.194682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.980 [2024-10-15 13:07:31.194739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.980 [2024-10-15 13:07:31.194753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.980 [2024-10-15 13:07:31.194759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.980 [2024-10-15 13:07:31.194765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.980 [2024-10-15 13:07:31.194780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.980 qpair failed and we were unable to recover it.
00:27:10.980 [2024-10-15 13:07:31.204688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.980 [2024-10-15 13:07:31.204740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.980 [2024-10-15 13:07:31.204754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.980 [2024-10-15 13:07:31.204761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.980 [2024-10-15 13:07:31.204767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.980 [2024-10-15 13:07:31.204782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.980 qpair failed and we were unable to recover it.
00:27:10.980 [2024-10-15 13:07:31.214704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.980 [2024-10-15 13:07:31.214756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.980 [2024-10-15 13:07:31.214769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.980 [2024-10-15 13:07:31.214776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.980 [2024-10-15 13:07:31.214781] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.980 [2024-10-15 13:07:31.214796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.980 qpair failed and we were unable to recover it.
00:27:10.980 [2024-10-15 13:07:31.224673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.980 [2024-10-15 13:07:31.224728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.980 [2024-10-15 13:07:31.224741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.980 [2024-10-15 13:07:31.224747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.980 [2024-10-15 13:07:31.224753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.980 [2024-10-15 13:07:31.224768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.980 qpair failed and we were unable to recover it.
00:27:10.980 [2024-10-15 13:07:31.234793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.980 [2024-10-15 13:07:31.234847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.980 [2024-10-15 13:07:31.234861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.980 [2024-10-15 13:07:31.234867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.980 [2024-10-15 13:07:31.234873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.980 [2024-10-15 13:07:31.234887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.980 qpair failed and we were unable to recover it.
00:27:10.980 [2024-10-15 13:07:31.244741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.980 [2024-10-15 13:07:31.244796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.981 [2024-10-15 13:07:31.244809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.981 [2024-10-15 13:07:31.244816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.981 [2024-10-15 13:07:31.244822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.981 [2024-10-15 13:07:31.244836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.981 qpair failed and we were unable to recover it.
00:27:10.981 [2024-10-15 13:07:31.254832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.981 [2024-10-15 13:07:31.254891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.981 [2024-10-15 13:07:31.254904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.981 [2024-10-15 13:07:31.254911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.981 [2024-10-15 13:07:31.254916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.981 [2024-10-15 13:07:31.254930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.981 qpair failed and we were unable to recover it.
00:27:10.981 [2024-10-15 13:07:31.264835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.981 [2024-10-15 13:07:31.264889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.981 [2024-10-15 13:07:31.264906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.981 [2024-10-15 13:07:31.264913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.981 [2024-10-15 13:07:31.264919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.981 [2024-10-15 13:07:31.264933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.981 qpair failed and we were unable to recover it.
00:27:10.981 [2024-10-15 13:07:31.274951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.981 [2024-10-15 13:07:31.275011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.981 [2024-10-15 13:07:31.275024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.981 [2024-10-15 13:07:31.275031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.981 [2024-10-15 13:07:31.275037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.981 [2024-10-15 13:07:31.275051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.981 qpair failed and we were unable to recover it.
00:27:10.981 [2024-10-15 13:07:31.284903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.981 [2024-10-15 13:07:31.284957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.981 [2024-10-15 13:07:31.284971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.981 [2024-10-15 13:07:31.284977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.981 [2024-10-15 13:07:31.284983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.981 [2024-10-15 13:07:31.284998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.981 qpair failed and we were unable to recover it.
00:27:10.981 [2024-10-15 13:07:31.294932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.981 [2024-10-15 13:07:31.294988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.981 [2024-10-15 13:07:31.295001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.981 [2024-10-15 13:07:31.295008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.981 [2024-10-15 13:07:31.295013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:10.981 [2024-10-15 13:07:31.295028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.981 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.304976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.305032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.305045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.305052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.305058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.242 [2024-10-15 13:07:31.305076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.242 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.315000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.315055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.315068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.315075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.315081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.242 [2024-10-15 13:07:31.315095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.242 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.325027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.325082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.325095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.325102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.325108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.242 [2024-10-15 13:07:31.325122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.242 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.335075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.335132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.335144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.335150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.335156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.242 [2024-10-15 13:07:31.335170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.242 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.345097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.345149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.345162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.345168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.345174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.242 [2024-10-15 13:07:31.345189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.242 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.355157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.355216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.355232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.355239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.355245] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.242 [2024-10-15 13:07:31.355259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.242 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.365150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.365205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.365218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.365225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.365230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.242 [2024-10-15 13:07:31.365245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.242 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.375226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.375293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.375306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.375312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.375318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.242 [2024-10-15 13:07:31.375332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.242 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.385208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.385264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.385277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.385284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.385290] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.242 [2024-10-15 13:07:31.385305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.242 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.395232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.395287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.395300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.395306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.395312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.242 [2024-10-15 13:07:31.395329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.242 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.405258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.405312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.405325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.405331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.405337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.242 [2024-10-15 13:07:31.405351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.242 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.415288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.415366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.415379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.415386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.415391] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.242 [2024-10-15 13:07:31.415405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.242 qpair failed and we were unable to recover it.
00:27:11.242 [2024-10-15 13:07:31.425370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.242 [2024-10-15 13:07:31.425474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.242 [2024-10-15 13:07:31.425488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.242 [2024-10-15 13:07:31.425495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.242 [2024-10-15 13:07:31.425501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.243 [2024-10-15 13:07:31.425516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.243 qpair failed and we were unable to recover it.
00:27:11.243 [2024-10-15 13:07:31.435350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.243 [2024-10-15 13:07:31.435404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.243 [2024-10-15 13:07:31.435418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.243 [2024-10-15 13:07:31.435424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.243 [2024-10-15 13:07:31.435430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.243 [2024-10-15 13:07:31.435445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.243 qpair failed and we were unable to recover it.
00:27:11.243 [2024-10-15 13:07:31.445398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.243 [2024-10-15 13:07:31.445459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.243 [2024-10-15 13:07:31.445479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.243 [2024-10-15 13:07:31.445485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.243 [2024-10-15 13:07:31.445491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.243 [2024-10-15 13:07:31.445505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.243 qpair failed and we were unable to recover it.
00:27:11.243 [2024-10-15 13:07:31.455398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.243 [2024-10-15 13:07:31.455449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.243 [2024-10-15 13:07:31.455463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.243 [2024-10-15 13:07:31.455469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.243 [2024-10-15 13:07:31.455475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.243 [2024-10-15 13:07:31.455489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.243 qpair failed and we were unable to recover it.
00:27:11.243 [2024-10-15 13:07:31.465433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.243 [2024-10-15 13:07:31.465493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.243 [2024-10-15 13:07:31.465506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.243 [2024-10-15 13:07:31.465513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.243 [2024-10-15 13:07:31.465519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.243 [2024-10-15 13:07:31.465533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.243 qpair failed and we were unable to recover it.
00:27:11.243 [2024-10-15 13:07:31.475465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.243 [2024-10-15 13:07:31.475517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.243 [2024-10-15 13:07:31.475530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.243 [2024-10-15 13:07:31.475537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.243 [2024-10-15 13:07:31.475543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.243 [2024-10-15 13:07:31.475557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.243 qpair failed and we were unable to recover it.
00:27:11.243 [2024-10-15 13:07:31.485478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.243 [2024-10-15 13:07:31.485532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.243 [2024-10-15 13:07:31.485546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.243 [2024-10-15 13:07:31.485552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.243 [2024-10-15 13:07:31.485562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.243 [2024-10-15 13:07:31.485576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.243 qpair failed and we were unable to recover it.
00:27:11.243 [2024-10-15 13:07:31.495489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.243 [2024-10-15 13:07:31.495542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.243 [2024-10-15 13:07:31.495556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.243 [2024-10-15 13:07:31.495563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.243 [2024-10-15 13:07:31.495568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.243 [2024-10-15 13:07:31.495583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.243 qpair failed and we were unable to recover it.
00:27:11.243 [2024-10-15 13:07:31.505523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.243 [2024-10-15 13:07:31.505626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.243 [2024-10-15 13:07:31.505640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.243 [2024-10-15 13:07:31.505647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.243 [2024-10-15 13:07:31.505653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.243 [2024-10-15 13:07:31.505668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.243 qpair failed and we were unable to recover it.
00:27:11.243 [2024-10-15 13:07:31.515559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.243 [2024-10-15 13:07:31.515617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.243 [2024-10-15 13:07:31.515631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.243 [2024-10-15 13:07:31.515638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.243 [2024-10-15 13:07:31.515644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.243 [2024-10-15 13:07:31.515658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.243 qpair failed and we were unable to recover it. 
00:27:11.243 [2024-10-15 13:07:31.525586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.243 [2024-10-15 13:07:31.525637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.243 [2024-10-15 13:07:31.525650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.243 [2024-10-15 13:07:31.525657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.243 [2024-10-15 13:07:31.525662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.243 [2024-10-15 13:07:31.525677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.243 qpair failed and we were unable to recover it. 
00:27:11.243 [2024-10-15 13:07:31.535633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.243 [2024-10-15 13:07:31.535695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.243 [2024-10-15 13:07:31.535707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.243 [2024-10-15 13:07:31.535714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.243 [2024-10-15 13:07:31.535720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.243 [2024-10-15 13:07:31.535734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.243 qpair failed and we were unable to recover it. 
00:27:11.243 [2024-10-15 13:07:31.545654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.243 [2024-10-15 13:07:31.545750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.243 [2024-10-15 13:07:31.545763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.243 [2024-10-15 13:07:31.545770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.243 [2024-10-15 13:07:31.545775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.243 [2024-10-15 13:07:31.545790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.243 qpair failed and we were unable to recover it. 
00:27:11.243 [2024-10-15 13:07:31.555681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.243 [2024-10-15 13:07:31.555736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.243 [2024-10-15 13:07:31.555750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.243 [2024-10-15 13:07:31.555756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.243 [2024-10-15 13:07:31.555762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.243 [2024-10-15 13:07:31.555776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.243 qpair failed and we were unable to recover it. 
00:27:11.504 [2024-10-15 13:07:31.565720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.504 [2024-10-15 13:07:31.565784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.504 [2024-10-15 13:07:31.565798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.504 [2024-10-15 13:07:31.565805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.504 [2024-10-15 13:07:31.565811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.504 [2024-10-15 13:07:31.565826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.504 qpair failed and we were unable to recover it. 
00:27:11.504 [2024-10-15 13:07:31.575723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.504 [2024-10-15 13:07:31.575778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.504 [2024-10-15 13:07:31.575791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.504 [2024-10-15 13:07:31.575798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.504 [2024-10-15 13:07:31.575807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.504 [2024-10-15 13:07:31.575822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.504 qpair failed and we were unable to recover it. 
00:27:11.504 [2024-10-15 13:07:31.585688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.504 [2024-10-15 13:07:31.585776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.504 [2024-10-15 13:07:31.585790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.504 [2024-10-15 13:07:31.585796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.504 [2024-10-15 13:07:31.585802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.504 [2024-10-15 13:07:31.585817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.504 qpair failed and we were unable to recover it. 
00:27:11.504 [2024-10-15 13:07:31.595777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.504 [2024-10-15 13:07:31.595833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.504 [2024-10-15 13:07:31.595847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.504 [2024-10-15 13:07:31.595853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.504 [2024-10-15 13:07:31.595859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.504 [2024-10-15 13:07:31.595874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.504 qpair failed and we were unable to recover it. 
00:27:11.504 [2024-10-15 13:07:31.605828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.504 [2024-10-15 13:07:31.605884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.504 [2024-10-15 13:07:31.605897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.504 [2024-10-15 13:07:31.605904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.504 [2024-10-15 13:07:31.605910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.504 [2024-10-15 13:07:31.605924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.504 qpair failed and we were unable to recover it. 
00:27:11.504 [2024-10-15 13:07:31.615874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.504 [2024-10-15 13:07:31.615930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.504 [2024-10-15 13:07:31.615942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.504 [2024-10-15 13:07:31.615948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.504 [2024-10-15 13:07:31.615955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.504 [2024-10-15 13:07:31.615969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.504 qpair failed and we were unable to recover it. 
00:27:11.504 [2024-10-15 13:07:31.625866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.504 [2024-10-15 13:07:31.625926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.504 [2024-10-15 13:07:31.625939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.505 [2024-10-15 13:07:31.625946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.505 [2024-10-15 13:07:31.625952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.505 [2024-10-15 13:07:31.625966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.505 qpair failed and we were unable to recover it. 
00:27:11.505 [2024-10-15 13:07:31.635899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.505 [2024-10-15 13:07:31.635963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.505 [2024-10-15 13:07:31.635976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.505 [2024-10-15 13:07:31.635982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.505 [2024-10-15 13:07:31.635989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.505 [2024-10-15 13:07:31.636003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.505 qpair failed and we were unable to recover it. 
00:27:11.505 [2024-10-15 13:07:31.645946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.505 [2024-10-15 13:07:31.646054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.505 [2024-10-15 13:07:31.646067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.505 [2024-10-15 13:07:31.646074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.505 [2024-10-15 13:07:31.646079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.505 [2024-10-15 13:07:31.646094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.505 qpair failed and we were unable to recover it. 
00:27:11.505 [2024-10-15 13:07:31.655936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.505 [2024-10-15 13:07:31.655990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.505 [2024-10-15 13:07:31.656003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.505 [2024-10-15 13:07:31.656009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.505 [2024-10-15 13:07:31.656015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.505 [2024-10-15 13:07:31.656030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.505 qpair failed and we were unable to recover it. 
00:27:11.505 [2024-10-15 13:07:31.665978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.505 [2024-10-15 13:07:31.666033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.505 [2024-10-15 13:07:31.666047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.505 [2024-10-15 13:07:31.666058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.505 [2024-10-15 13:07:31.666063] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.505 [2024-10-15 13:07:31.666078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.505 qpair failed and we were unable to recover it. 
00:27:11.505 [2024-10-15 13:07:31.676018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.505 [2024-10-15 13:07:31.676080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.505 [2024-10-15 13:07:31.676093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.505 [2024-10-15 13:07:31.676100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.505 [2024-10-15 13:07:31.676105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.505 [2024-10-15 13:07:31.676120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.505 qpair failed and we were unable to recover it. 
00:27:11.505 [2024-10-15 13:07:31.685954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.505 [2024-10-15 13:07:31.686003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.505 [2024-10-15 13:07:31.686016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.505 [2024-10-15 13:07:31.686022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.505 [2024-10-15 13:07:31.686028] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.505 [2024-10-15 13:07:31.686043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.505 qpair failed and we were unable to recover it. 
00:27:11.505 [2024-10-15 13:07:31.696073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.505 [2024-10-15 13:07:31.696131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.505 [2024-10-15 13:07:31.696144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.505 [2024-10-15 13:07:31.696150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.505 [2024-10-15 13:07:31.696156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.505 [2024-10-15 13:07:31.696170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.505 qpair failed and we were unable to recover it. 
00:27:11.505 [2024-10-15 13:07:31.706078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.505 [2024-10-15 13:07:31.706139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.505 [2024-10-15 13:07:31.706152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.505 [2024-10-15 13:07:31.706158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.505 [2024-10-15 13:07:31.706164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.505 [2024-10-15 13:07:31.706179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.505 qpair failed and we were unable to recover it. 
00:27:11.505 [2024-10-15 13:07:31.716103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.505 [2024-10-15 13:07:31.716154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.505 [2024-10-15 13:07:31.716167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.505 [2024-10-15 13:07:31.716173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.505 [2024-10-15 13:07:31.716179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.505 [2024-10-15 13:07:31.716193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.505 qpair failed and we were unable to recover it. 
00:27:11.505 [2024-10-15 13:07:31.726175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.505 [2024-10-15 13:07:31.726276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.506 [2024-10-15 13:07:31.726289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.506 [2024-10-15 13:07:31.726296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.506 [2024-10-15 13:07:31.726302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.506 [2024-10-15 13:07:31.726317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.506 qpair failed and we were unable to recover it. 
00:27:11.506 [2024-10-15 13:07:31.736172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.506 [2024-10-15 13:07:31.736234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.506 [2024-10-15 13:07:31.736247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.506 [2024-10-15 13:07:31.736254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.506 [2024-10-15 13:07:31.736259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.506 [2024-10-15 13:07:31.736274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.506 qpair failed and we were unable to recover it. 
00:27:11.506 [2024-10-15 13:07:31.746192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.506 [2024-10-15 13:07:31.746248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.506 [2024-10-15 13:07:31.746261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.506 [2024-10-15 13:07:31.746268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.506 [2024-10-15 13:07:31.746274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.506 [2024-10-15 13:07:31.746288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.506 qpair failed and we were unable to recover it. 
00:27:11.506 [2024-10-15 13:07:31.756188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.506 [2024-10-15 13:07:31.756279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.506 [2024-10-15 13:07:31.756295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.506 [2024-10-15 13:07:31.756301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.506 [2024-10-15 13:07:31.756307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.506 [2024-10-15 13:07:31.756322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.506 qpair failed and we were unable to recover it. 
00:27:11.506 [2024-10-15 13:07:31.766283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.506 [2024-10-15 13:07:31.766343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.506 [2024-10-15 13:07:31.766356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.506 [2024-10-15 13:07:31.766362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.506 [2024-10-15 13:07:31.766368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.506 [2024-10-15 13:07:31.766383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.506 qpair failed and we were unable to recover it. 
00:27:11.506 [2024-10-15 13:07:31.776266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.506 [2024-10-15 13:07:31.776316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.506 [2024-10-15 13:07:31.776328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.506 [2024-10-15 13:07:31.776335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.506 [2024-10-15 13:07:31.776340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.506 [2024-10-15 13:07:31.776354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.506 qpair failed and we were unable to recover it. 
00:27:11.506 [2024-10-15 13:07:31.786286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.506 [2024-10-15 13:07:31.786341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.506 [2024-10-15 13:07:31.786355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.506 [2024-10-15 13:07:31.786361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.506 [2024-10-15 13:07:31.786367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:11.506 [2024-10-15 13:07:31.786382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.506 qpair failed and we were unable to recover it. 
00:27:11.506 [2024-10-15 13:07:31.796355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.506 [2024-10-15 13:07:31.796428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.506 [2024-10-15 13:07:31.796442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.506 [2024-10-15 13:07:31.796448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.506 [2024-10-15 13:07:31.796454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.506 [2024-10-15 13:07:31.796470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.506 qpair failed and we were unable to recover it.
00:27:11.506 [2024-10-15 13:07:31.806384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.506 [2024-10-15 13:07:31.806475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.506 [2024-10-15 13:07:31.806488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.506 [2024-10-15 13:07:31.806494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.506 [2024-10-15 13:07:31.806500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.506 [2024-10-15 13:07:31.806514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.506 qpair failed and we were unable to recover it.
00:27:11.506 [2024-10-15 13:07:31.816413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.506 [2024-10-15 13:07:31.816475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.506 [2024-10-15 13:07:31.816489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.506 [2024-10-15 13:07:31.816495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.506 [2024-10-15 13:07:31.816501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.506 [2024-10-15 13:07:31.816515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.506 qpair failed and we were unable to recover it.
00:27:11.766 [2024-10-15 13:07:31.826428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.766 [2024-10-15 13:07:31.826514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.826528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.826535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.826541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.826556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.836447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.836499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.836512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.836518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.836524] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.836538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.846492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.846547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.846564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.846571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.846577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.846591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.856495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.856549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.856563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.856569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.856575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.856589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.866587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.866689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.866703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.866710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.866716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.866731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.876549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.876606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.876620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.876626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.876632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.876646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.886630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.886686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.886699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.886705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.886711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.886729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.896608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.896663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.896676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.896682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.896688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.896702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.906671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.906771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.906784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.906791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.906797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.906811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.916705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.916764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.916778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.916785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.916790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.916805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.926701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.926783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.926797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.926803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.926809] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.926823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.936729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.936797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.936813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.936820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.936825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.936839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.946770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.946840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.946853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.946859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.946865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.946880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.956793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.956848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.767 [2024-10-15 13:07:31.956861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.767 [2024-10-15 13:07:31.956867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.767 [2024-10-15 13:07:31.956873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.767 [2024-10-15 13:07:31.956887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.767 qpair failed and we were unable to recover it.
00:27:11.767 [2024-10-15 13:07:31.966812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.767 [2024-10-15 13:07:31.966880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:31.966894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:31.966900] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:31.966905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:31.966920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:11.768 [2024-10-15 13:07:31.976824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.768 [2024-10-15 13:07:31.976880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:31.976893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:31.976899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:31.976908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:31.976923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:11.768 [2024-10-15 13:07:31.986906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.768 [2024-10-15 13:07:31.986965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:31.986978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:31.986985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:31.986991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:31.987005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:11.768 [2024-10-15 13:07:31.996924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.768 [2024-10-15 13:07:31.996978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:31.996991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:31.996998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:31.997004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:31.997018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:11.768 [2024-10-15 13:07:32.006923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.768 [2024-10-15 13:07:32.006973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:32.006986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:32.006993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:32.006999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:32.007013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:11.768 [2024-10-15 13:07:32.017006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.768 [2024-10-15 13:07:32.017105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:32.017118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:32.017124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:32.017131] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:32.017145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:11.768 [2024-10-15 13:07:32.026986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.768 [2024-10-15 13:07:32.027047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:32.027060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:32.027067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:32.027073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:32.027087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:11.768 [2024-10-15 13:07:32.037005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.768 [2024-10-15 13:07:32.037060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:32.037073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:32.037079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:32.037085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:32.037100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:11.768 [2024-10-15 13:07:32.047058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.768 [2024-10-15 13:07:32.047141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:32.047154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:32.047160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:32.047166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:32.047180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:11.768 [2024-10-15 13:07:32.057060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.768 [2024-10-15 13:07:32.057156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:32.057169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:32.057176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:32.057182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:32.057196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:11.768 [2024-10-15 13:07:32.067094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.768 [2024-10-15 13:07:32.067190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:32.067202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:32.067208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:32.067217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:32.067232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:11.768 [2024-10-15 13:07:32.077167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.768 [2024-10-15 13:07:32.077222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:32.077236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:32.077242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:32.077248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:32.077263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:11.768 [2024-10-15 13:07:32.087174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.768 [2024-10-15 13:07:32.087263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.768 [2024-10-15 13:07:32.087276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.768 [2024-10-15 13:07:32.087282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.768 [2024-10-15 13:07:32.087288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:11.768 [2024-10-15 13:07:32.087302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.768 qpair failed and we were unable to recover it.
00:27:12.028 [2024-10-15 13:07:32.097205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.028 [2024-10-15 13:07:32.097265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.028 [2024-10-15 13:07:32.097279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.028 [2024-10-15 13:07:32.097286] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.028 [2024-10-15 13:07:32.097292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:12.029 [2024-10-15 13:07:32.097306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:12.029 qpair failed and we were unable to recover it.
00:27:12.029 [2024-10-15 13:07:32.107213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.029 [2024-10-15 13:07:32.107267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.029 [2024-10-15 13:07:32.107281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.029 [2024-10-15 13:07:32.107287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.029 [2024-10-15 13:07:32.107293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:12.029 [2024-10-15 13:07:32.107307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:12.029 qpair failed and we were unable to recover it.
00:27:12.029 [2024-10-15 13:07:32.117248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.029 [2024-10-15 13:07:32.117302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.029 [2024-10-15 13:07:32.117315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.029 [2024-10-15 13:07:32.117321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.029 [2024-10-15 13:07:32.117327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:12.029 [2024-10-15 13:07:32.117342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:12.029 qpair failed and we were unable to recover it.
00:27:12.029 [2024-10-15 13:07:32.127276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.029 [2024-10-15 13:07:32.127325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.029 [2024-10-15 13:07:32.127338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.029 [2024-10-15 13:07:32.127344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.029 [2024-10-15 13:07:32.127350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:12.029 [2024-10-15 13:07:32.127364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:12.029 qpair failed and we were unable to recover it.
00:27:12.029 [2024-10-15 13:07:32.137282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.029 [2024-10-15 13:07:32.137329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.029 [2024-10-15 13:07:32.137342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.029 [2024-10-15 13:07:32.137348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.029 [2024-10-15 13:07:32.137354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:12.029 [2024-10-15 13:07:32.137368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:12.029 qpair failed and we were unable to recover it.
00:27:12.029 [2024-10-15 13:07:32.147313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.029 [2024-10-15 13:07:32.147365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.029 [2024-10-15 13:07:32.147378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.029 [2024-10-15 13:07:32.147384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.029 [2024-10-15 13:07:32.147390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90
00:27:12.029 [2024-10-15 13:07:32.147404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:12.029 qpair failed and we were unable to recover it.
00:27:12.029 [2024-10-15 13:07:32.157347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.029 [2024-10-15 13:07:32.157404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.029 [2024-10-15 13:07:32.157418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.029 [2024-10-15 13:07:32.157428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.029 [2024-10-15 13:07:32.157435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.029 [2024-10-15 13:07:32.157449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.029 qpair failed and we were unable to recover it. 
00:27:12.029 [2024-10-15 13:07:32.167365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.029 [2024-10-15 13:07:32.167422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.029 [2024-10-15 13:07:32.167435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.029 [2024-10-15 13:07:32.167442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.029 [2024-10-15 13:07:32.167448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.029 [2024-10-15 13:07:32.167463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.029 qpair failed and we were unable to recover it. 
00:27:12.029 [2024-10-15 13:07:32.177384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.029 [2024-10-15 13:07:32.177460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.029 [2024-10-15 13:07:32.177473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.029 [2024-10-15 13:07:32.177480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.029 [2024-10-15 13:07:32.177486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.029 [2024-10-15 13:07:32.177500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.029 qpair failed and we were unable to recover it. 
00:27:12.029 [2024-10-15 13:07:32.187424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.029 [2024-10-15 13:07:32.187478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.029 [2024-10-15 13:07:32.187492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.029 [2024-10-15 13:07:32.187498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.029 [2024-10-15 13:07:32.187504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.029 [2024-10-15 13:07:32.187519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.029 qpair failed and we were unable to recover it. 
00:27:12.029 [2024-10-15 13:07:32.197513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.029 [2024-10-15 13:07:32.197573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.029 [2024-10-15 13:07:32.197586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.029 [2024-10-15 13:07:32.197593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.029 [2024-10-15 13:07:32.197599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.029 [2024-10-15 13:07:32.197619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.029 qpair failed and we were unable to recover it. 
00:27:12.029 [2024-10-15 13:07:32.207485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.029 [2024-10-15 13:07:32.207537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.029 [2024-10-15 13:07:32.207552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.029 [2024-10-15 13:07:32.207559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.029 [2024-10-15 13:07:32.207565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.029 [2024-10-15 13:07:32.207580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.029 qpair failed and we were unable to recover it. 
00:27:12.029 [2024-10-15 13:07:32.217441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.029 [2024-10-15 13:07:32.217493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.029 [2024-10-15 13:07:32.217506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.029 [2024-10-15 13:07:32.217513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.029 [2024-10-15 13:07:32.217519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.029 [2024-10-15 13:07:32.217533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.029 qpair failed and we were unable to recover it. 
00:27:12.029 [2024-10-15 13:07:32.227544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.029 [2024-10-15 13:07:32.227614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.029 [2024-10-15 13:07:32.227628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.029 [2024-10-15 13:07:32.227635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.029 [2024-10-15 13:07:32.227641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.029 [2024-10-15 13:07:32.227655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.029 qpair failed and we were unable to recover it. 
00:27:12.029 [2024-10-15 13:07:32.237584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.029 [2024-10-15 13:07:32.237658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.029 [2024-10-15 13:07:32.237672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.029 [2024-10-15 13:07:32.237678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.030 [2024-10-15 13:07:32.237684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.030 [2024-10-15 13:07:32.237699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.030 qpair failed and we were unable to recover it. 
00:27:12.030 [2024-10-15 13:07:32.247582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.030 [2024-10-15 13:07:32.247636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.030 [2024-10-15 13:07:32.247650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.030 [2024-10-15 13:07:32.247659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.030 [2024-10-15 13:07:32.247665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.030 [2024-10-15 13:07:32.247680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.030 qpair failed and we were unable to recover it. 
00:27:12.030 [2024-10-15 13:07:32.257644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.030 [2024-10-15 13:07:32.257694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.030 [2024-10-15 13:07:32.257706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.030 [2024-10-15 13:07:32.257713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.030 [2024-10-15 13:07:32.257719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.030 [2024-10-15 13:07:32.257733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.030 qpair failed and we were unable to recover it. 
00:27:12.030 [2024-10-15 13:07:32.267704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.030 [2024-10-15 13:07:32.267808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.030 [2024-10-15 13:07:32.267821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.030 [2024-10-15 13:07:32.267828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.030 [2024-10-15 13:07:32.267834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.030 [2024-10-15 13:07:32.267848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.030 qpair failed and we were unable to recover it. 
00:27:12.030 [2024-10-15 13:07:32.277712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.030 [2024-10-15 13:07:32.277774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.030 [2024-10-15 13:07:32.277788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.030 [2024-10-15 13:07:32.277795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.030 [2024-10-15 13:07:32.277800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.030 [2024-10-15 13:07:32.277815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.030 qpair failed and we were unable to recover it. 
00:27:12.030 [2024-10-15 13:07:32.287735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.030 [2024-10-15 13:07:32.287790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.030 [2024-10-15 13:07:32.287803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.030 [2024-10-15 13:07:32.287809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.030 [2024-10-15 13:07:32.287816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.030 [2024-10-15 13:07:32.287832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.030 qpair failed and we were unable to recover it. 
00:27:12.030 [2024-10-15 13:07:32.297754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.030 [2024-10-15 13:07:32.297805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.030 [2024-10-15 13:07:32.297819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.030 [2024-10-15 13:07:32.297825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.030 [2024-10-15 13:07:32.297831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.030 [2024-10-15 13:07:32.297846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.030 qpair failed and we were unable to recover it. 
00:27:12.030 [2024-10-15 13:07:32.307818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.030 [2024-10-15 13:07:32.307873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.030 [2024-10-15 13:07:32.307887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.030 [2024-10-15 13:07:32.307893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.030 [2024-10-15 13:07:32.307899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.030 [2024-10-15 13:07:32.307913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.030 qpair failed and we were unable to recover it. 
00:27:12.030 [2024-10-15 13:07:32.317817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.030 [2024-10-15 13:07:32.317895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.030 [2024-10-15 13:07:32.317909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.030 [2024-10-15 13:07:32.317916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.030 [2024-10-15 13:07:32.317922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.030 [2024-10-15 13:07:32.317936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.030 qpair failed and we were unable to recover it. 
00:27:12.030 [2024-10-15 13:07:32.327855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.030 [2024-10-15 13:07:32.327916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.030 [2024-10-15 13:07:32.327929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.030 [2024-10-15 13:07:32.327936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.030 [2024-10-15 13:07:32.327942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.030 [2024-10-15 13:07:32.327957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.030 qpair failed and we were unable to recover it. 
00:27:12.030 [2024-10-15 13:07:32.337906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.030 [2024-10-15 13:07:32.337959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.030 [2024-10-15 13:07:32.337975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.030 [2024-10-15 13:07:32.337982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.030 [2024-10-15 13:07:32.337988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.030 [2024-10-15 13:07:32.338002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.030 qpair failed and we were unable to recover it. 
00:27:12.030 [2024-10-15 13:07:32.347869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.030 [2024-10-15 13:07:32.347925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.030 [2024-10-15 13:07:32.347938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.030 [2024-10-15 13:07:32.347945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.030 [2024-10-15 13:07:32.347951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.030 [2024-10-15 13:07:32.347966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.030 qpair failed and we were unable to recover it. 
00:27:12.290 [2024-10-15 13:07:32.357932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.290 [2024-10-15 13:07:32.358014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.290 [2024-10-15 13:07:32.358028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.290 [2024-10-15 13:07:32.358034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.290 [2024-10-15 13:07:32.358040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.290 [2024-10-15 13:07:32.358055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.290 qpair failed and we were unable to recover it. 
00:27:12.290 [2024-10-15 13:07:32.367956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.290 [2024-10-15 13:07:32.368028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.290 [2024-10-15 13:07:32.368043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.290 [2024-10-15 13:07:32.368050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.290 [2024-10-15 13:07:32.368055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.290 [2024-10-15 13:07:32.368070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.290 qpair failed and we were unable to recover it. 
00:27:12.290 [2024-10-15 13:07:32.377965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.290 [2024-10-15 13:07:32.378015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.290 [2024-10-15 13:07:32.378028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.290 [2024-10-15 13:07:32.378034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.290 [2024-10-15 13:07:32.378041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.290 [2024-10-15 13:07:32.378058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.290 qpair failed and we were unable to recover it. 
00:27:12.290 [2024-10-15 13:07:32.387932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.290 [2024-10-15 13:07:32.387991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.290 [2024-10-15 13:07:32.388004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.290 [2024-10-15 13:07:32.388010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.290 [2024-10-15 13:07:32.388016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.290 [2024-10-15 13:07:32.388031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.290 qpair failed and we were unable to recover it. 
00:27:12.290 [2024-10-15 13:07:32.398005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.290 [2024-10-15 13:07:32.398089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.290 [2024-10-15 13:07:32.398102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.290 [2024-10-15 13:07:32.398108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.290 [2024-10-15 13:07:32.398115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.290 [2024-10-15 13:07:32.398128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.290 qpair failed and we were unable to recover it. 
00:27:12.290 [2024-10-15 13:07:32.408030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.290 [2024-10-15 13:07:32.408101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.290 [2024-10-15 13:07:32.408114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.290 [2024-10-15 13:07:32.408121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.290 [2024-10-15 13:07:32.408127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.408141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.418089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.418143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.418157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.418164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.418170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.418184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.428117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.428172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.428189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.428195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.428201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.428215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.438060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.438119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.438133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.438139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.438145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.438160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.448140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.448224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.448237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.448243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.448249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.448263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.458196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.458249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.458263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.458270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.458276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.458290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.468171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.468223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.468236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.468242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.468248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.468265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.478200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.478255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.478268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.478275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.478281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.478295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.488257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.488319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.488333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.488339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.488345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.488360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.498305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.498380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.498394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.498400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.498406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.498421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.508404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.508489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.508502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.508509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.508515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.508529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.518429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.518488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.518501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.518508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.518514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.518528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.528430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.528487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.528501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.528508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.528514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.528528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.538442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.538497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.538511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.291 [2024-10-15 13:07:32.538517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.291 [2024-10-15 13:07:32.538523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.291 [2024-10-15 13:07:32.538537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.291 qpair failed and we were unable to recover it. 
00:27:12.291 [2024-10-15 13:07:32.548485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.291 [2024-10-15 13:07:32.548544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.291 [2024-10-15 13:07:32.548558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.292 [2024-10-15 13:07:32.548565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.292 [2024-10-15 13:07:32.548570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.292 [2024-10-15 13:07:32.548584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.292 qpair failed and we were unable to recover it. 
00:27:12.292 [2024-10-15 13:07:32.558520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.292 [2024-10-15 13:07:32.558587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.292 [2024-10-15 13:07:32.558604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.292 [2024-10-15 13:07:32.558611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.292 [2024-10-15 13:07:32.558620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.292 [2024-10-15 13:07:32.558636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.292 qpair failed and we were unable to recover it. 
00:27:12.292 [2024-10-15 13:07:32.568537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.292 [2024-10-15 13:07:32.568636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.292 [2024-10-15 13:07:32.568649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.292 [2024-10-15 13:07:32.568656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.292 [2024-10-15 13:07:32.568662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.292 [2024-10-15 13:07:32.568679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.292 qpair failed and we were unable to recover it. 
00:27:12.292 [2024-10-15 13:07:32.578594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.292 [2024-10-15 13:07:32.578694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.292 [2024-10-15 13:07:32.578707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.292 [2024-10-15 13:07:32.578713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.292 [2024-10-15 13:07:32.578719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.292 [2024-10-15 13:07:32.578734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.292 qpair failed and we were unable to recover it. 
00:27:12.292 [2024-10-15 13:07:32.588587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.292 [2024-10-15 13:07:32.588651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.292 [2024-10-15 13:07:32.588665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.292 [2024-10-15 13:07:32.588671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.292 [2024-10-15 13:07:32.588677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.292 [2024-10-15 13:07:32.588691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.292 qpair failed and we were unable to recover it. 
00:27:12.292 [2024-10-15 13:07:32.598566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.292 [2024-10-15 13:07:32.598653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.292 [2024-10-15 13:07:32.598668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.292 [2024-10-15 13:07:32.598675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.292 [2024-10-15 13:07:32.598681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.292 [2024-10-15 13:07:32.598696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.292 qpair failed and we were unable to recover it. 
00:27:12.292 [2024-10-15 13:07:32.608576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.292 [2024-10-15 13:07:32.608645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.292 [2024-10-15 13:07:32.608659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.292 [2024-10-15 13:07:32.608666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.292 [2024-10-15 13:07:32.608672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.292 [2024-10-15 13:07:32.608688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.292 qpair failed and we were unable to recover it. 
00:27:12.552 [2024-10-15 13:07:32.618595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.552 [2024-10-15 13:07:32.618653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.552 [2024-10-15 13:07:32.618667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.552 [2024-10-15 13:07:32.618673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.552 [2024-10-15 13:07:32.618680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.552 [2024-10-15 13:07:32.618694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.552 qpair failed and we were unable to recover it. 
00:27:12.552 [2024-10-15 13:07:32.628683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.552 [2024-10-15 13:07:32.628760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.552 [2024-10-15 13:07:32.628773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.552 [2024-10-15 13:07:32.628781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.552 [2024-10-15 13:07:32.628786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.552 [2024-10-15 13:07:32.628800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.552 qpair failed and we were unable to recover it. 
00:27:12.552 [2024-10-15 13:07:32.638761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.552 [2024-10-15 13:07:32.638815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.552 [2024-10-15 13:07:32.638830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.552 [2024-10-15 13:07:32.638837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.552 [2024-10-15 13:07:32.638843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.552 [2024-10-15 13:07:32.638857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.552 qpair failed and we were unable to recover it. 
00:27:12.552 [2024-10-15 13:07:32.648772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.552 [2024-10-15 13:07:32.648822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.552 [2024-10-15 13:07:32.648835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.552 [2024-10-15 13:07:32.648844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.552 [2024-10-15 13:07:32.648850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.552 [2024-10-15 13:07:32.648865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.552 qpair failed and we were unable to recover it. 
00:27:12.552 [2024-10-15 13:07:32.658731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.552 [2024-10-15 13:07:32.658782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.552 [2024-10-15 13:07:32.658796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.552 [2024-10-15 13:07:32.658802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.552 [2024-10-15 13:07:32.658807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.552 [2024-10-15 13:07:32.658821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.552 qpair failed and we were unable to recover it. 
00:27:12.552 [2024-10-15 13:07:32.668759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.552 [2024-10-15 13:07:32.668814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.552 [2024-10-15 13:07:32.668827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.552 [2024-10-15 13:07:32.668834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.552 [2024-10-15 13:07:32.668839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.552 [2024-10-15 13:07:32.668853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.552 qpair failed and we were unable to recover it. 
00:27:12.552 [2024-10-15 13:07:32.678817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.552 [2024-10-15 13:07:32.678871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.552 [2024-10-15 13:07:32.678885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.552 [2024-10-15 13:07:32.678891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.552 [2024-10-15 13:07:32.678897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.552 [2024-10-15 13:07:32.678911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.552 qpair failed and we were unable to recover it. 
00:27:12.552 [2024-10-15 13:07:32.688863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.552 [2024-10-15 13:07:32.688966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.552 [2024-10-15 13:07:32.688979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.552 [2024-10-15 13:07:32.688985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.552 [2024-10-15 13:07:32.688991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.552 [2024-10-15 13:07:32.689006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.552 qpair failed and we were unable to recover it. 
00:27:12.552 [2024-10-15 13:07:32.698833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.552 [2024-10-15 13:07:32.698884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.552 [2024-10-15 13:07:32.698897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.552 [2024-10-15 13:07:32.698904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.552 [2024-10-15 13:07:32.698909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.552 [2024-10-15 13:07:32.698924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.552 qpair failed and we were unable to recover it. 
00:27:12.552 [2024-10-15 13:07:32.708922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.552 [2024-10-15 13:07:32.708985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.552 [2024-10-15 13:07:32.708998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.552 [2024-10-15 13:07:32.709005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.552 [2024-10-15 13:07:32.709011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.552 [2024-10-15 13:07:32.709025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.552 qpair failed and we were unable to recover it. 
00:27:12.552 [2024-10-15 13:07:32.718945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.552 [2024-10-15 13:07:32.719000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.552 [2024-10-15 13:07:32.719013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.552 [2024-10-15 13:07:32.719019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.719025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.719039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.728911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.728960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.728973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.728980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.728986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.729000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.738947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.739009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.739022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.739032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.739038] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.739052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.749081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.749138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.749151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.749158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.749164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.749178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.759085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.759155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.759169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.759176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.759181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.759195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.769119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.769171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.769184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.769191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.769197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.769211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.779165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.779218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.779231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.779237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.779243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.779257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.789175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.789275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.789288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.789295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.789300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.789315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.799222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.799283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.799296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.799302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.799308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.799322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.809228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.809281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.809294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.809300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.809306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.809320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.819241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.819327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.819339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.819345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.819351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.819365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.829282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.829349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.829366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.829373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.829379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.829394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.839308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.839362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.839375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.839382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.839388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.553 [2024-10-15 13:07:32.839402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.553 qpair failed and we were unable to recover it. 
00:27:12.553 [2024-10-15 13:07:32.849369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.553 [2024-10-15 13:07:32.849429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.553 [2024-10-15 13:07:32.849442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.553 [2024-10-15 13:07:32.849449] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.553 [2024-10-15 13:07:32.849454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.554 [2024-10-15 13:07:32.849470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.554 qpair failed and we were unable to recover it. 
00:27:12.554 [2024-10-15 13:07:32.859361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.554 [2024-10-15 13:07:32.859407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.554 [2024-10-15 13:07:32.859420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.554 [2024-10-15 13:07:32.859426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.554 [2024-10-15 13:07:32.859432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.554 [2024-10-15 13:07:32.859447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.554 qpair failed and we were unable to recover it. 
00:27:12.554 [2024-10-15 13:07:32.869392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.554 [2024-10-15 13:07:32.869450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.554 [2024-10-15 13:07:32.869464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.554 [2024-10-15 13:07:32.869470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.554 [2024-10-15 13:07:32.869476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.554 [2024-10-15 13:07:32.869493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.554 qpair failed and we were unable to recover it. 
00:27:12.814 [2024-10-15 13:07:32.879440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.814 [2024-10-15 13:07:32.879500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.814 [2024-10-15 13:07:32.879514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.814 [2024-10-15 13:07:32.879520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.814 [2024-10-15 13:07:32.879526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.814 [2024-10-15 13:07:32.879540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.814 qpair failed and we were unable to recover it. 
00:27:12.814 [2024-10-15 13:07:32.889447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.814 [2024-10-15 13:07:32.889500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.814 [2024-10-15 13:07:32.889514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.814 [2024-10-15 13:07:32.889520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.814 [2024-10-15 13:07:32.889526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.814 [2024-10-15 13:07:32.889540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.814 qpair failed and we were unable to recover it. 
00:27:12.814 [2024-10-15 13:07:32.899464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.814 [2024-10-15 13:07:32.899513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.814 [2024-10-15 13:07:32.899527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.814 [2024-10-15 13:07:32.899533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.814 [2024-10-15 13:07:32.899539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.814 [2024-10-15 13:07:32.899553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.814 qpair failed and we were unable to recover it. 
00:27:12.814 [2024-10-15 13:07:32.909510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.814 [2024-10-15 13:07:32.909567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.814 [2024-10-15 13:07:32.909581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.814 [2024-10-15 13:07:32.909587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.814 [2024-10-15 13:07:32.909593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.814 [2024-10-15 13:07:32.909611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.814 qpair failed and we were unable to recover it. 
00:27:12.814 [2024-10-15 13:07:32.919626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.814 [2024-10-15 13:07:32.919686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.814 [2024-10-15 13:07:32.919703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.814 [2024-10-15 13:07:32.919709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.814 [2024-10-15 13:07:32.919715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.814 [2024-10-15 13:07:32.919730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.814 qpair failed and we were unable to recover it. 
00:27:12.814 [2024-10-15 13:07:32.929594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.814 [2024-10-15 13:07:32.929652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.814 [2024-10-15 13:07:32.929665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.814 [2024-10-15 13:07:32.929671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.814 [2024-10-15 13:07:32.929677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.814 [2024-10-15 13:07:32.929692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.814 qpair failed and we were unable to recover it. 
00:27:12.814 [2024-10-15 13:07:32.939636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.814 [2024-10-15 13:07:32.939741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.814 [2024-10-15 13:07:32.939754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.814 [2024-10-15 13:07:32.939761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.814 [2024-10-15 13:07:32.939767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.814 [2024-10-15 13:07:32.939781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.814 qpair failed and we were unable to recover it. 
00:27:12.814 [2024-10-15 13:07:32.949655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.814 [2024-10-15 13:07:32.949719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.814 [2024-10-15 13:07:32.949732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.814 [2024-10-15 13:07:32.949739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.814 [2024-10-15 13:07:32.949744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.814 [2024-10-15 13:07:32.949759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.814 qpair failed and we were unable to recover it. 
00:27:12.814 [2024-10-15 13:07:32.959673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.814 [2024-10-15 13:07:32.959728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.814 [2024-10-15 13:07:32.959742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.814 [2024-10-15 13:07:32.959749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.814 [2024-10-15 13:07:32.959754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.814 [2024-10-15 13:07:32.959775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.814 qpair failed and we were unable to recover it. 
00:27:12.814 [2024-10-15 13:07:32.969662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.814 [2024-10-15 13:07:32.969731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.814 [2024-10-15 13:07:32.969745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.814 [2024-10-15 13:07:32.969751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.814 [2024-10-15 13:07:32.969757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.814 [2024-10-15 13:07:32.969771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.814 qpair failed and we were unable to recover it. 
00:27:12.814 [2024-10-15 13:07:32.979710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.815 [2024-10-15 13:07:32.979762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.815 [2024-10-15 13:07:32.979775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.815 [2024-10-15 13:07:32.979781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.815 [2024-10-15 13:07:32.979787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.815 [2024-10-15 13:07:32.979802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.815 qpair failed and we were unable to recover it. 
00:27:12.815 [2024-10-15 13:07:32.989725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.815 [2024-10-15 13:07:32.989782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.815 [2024-10-15 13:07:32.989795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.815 [2024-10-15 13:07:32.989802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.815 [2024-10-15 13:07:32.989807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.815 [2024-10-15 13:07:32.989822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.815 qpair failed and we were unable to recover it. 
00:27:12.815 [2024-10-15 13:07:32.999755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.815 [2024-10-15 13:07:32.999831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.815 [2024-10-15 13:07:32.999845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.815 [2024-10-15 13:07:32.999852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.815 [2024-10-15 13:07:32.999858] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.815 [2024-10-15 13:07:32.999872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.815 qpair failed and we were unable to recover it. 
00:27:12.815 [2024-10-15 13:07:33.009826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.815 [2024-10-15 13:07:33.009880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.815 [2024-10-15 13:07:33.009897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.815 [2024-10-15 13:07:33.009903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.815 [2024-10-15 13:07:33.009909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.815 [2024-10-15 13:07:33.009924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.815 qpair failed and we were unable to recover it. 
00:27:12.815 [2024-10-15 13:07:33.019795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.815 [2024-10-15 13:07:33.019847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.815 [2024-10-15 13:07:33.019861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.815 [2024-10-15 13:07:33.019867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.815 [2024-10-15 13:07:33.019873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.815 [2024-10-15 13:07:33.019887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.815 qpair failed and we were unable to recover it. 
00:27:12.815 [2024-10-15 13:07:33.029841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.815 [2024-10-15 13:07:33.029918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.815 [2024-10-15 13:07:33.029931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.815 [2024-10-15 13:07:33.029938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.815 [2024-10-15 13:07:33.029943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.815 [2024-10-15 13:07:33.029957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.815 qpair failed and we were unable to recover it. 
00:27:12.815 [2024-10-15 13:07:33.039868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.815 [2024-10-15 13:07:33.039920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.815 [2024-10-15 13:07:33.039933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.815 [2024-10-15 13:07:33.039939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.815 [2024-10-15 13:07:33.039945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.815 [2024-10-15 13:07:33.039960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.815 qpair failed and we were unable to recover it. 
00:27:12.815 [2024-10-15 13:07:33.049886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.815 [2024-10-15 13:07:33.049940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.815 [2024-10-15 13:07:33.049953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.815 [2024-10-15 13:07:33.049959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.815 [2024-10-15 13:07:33.049968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.815 [2024-10-15 13:07:33.049984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.815 qpair failed and we were unable to recover it. 
00:27:12.815 [2024-10-15 13:07:33.059923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.815 [2024-10-15 13:07:33.059973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.815 [2024-10-15 13:07:33.059986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.815 [2024-10-15 13:07:33.059992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.815 [2024-10-15 13:07:33.059998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.815 [2024-10-15 13:07:33.060012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.815 qpair failed and we were unable to recover it. 
00:27:12.815 [2024-10-15 13:07:33.070004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.815 [2024-10-15 13:07:33.070059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.815 [2024-10-15 13:07:33.070072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.815 [2024-10-15 13:07:33.070078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.815 [2024-10-15 13:07:33.070084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.815 [2024-10-15 13:07:33.070099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.815 qpair failed and we were unable to recover it. 
00:27:12.815 [2024-10-15 13:07:33.080003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.815 [2024-10-15 13:07:33.080056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.815 [2024-10-15 13:07:33.080069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.815 [2024-10-15 13:07:33.080076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.815 [2024-10-15 13:07:33.080082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.815 [2024-10-15 13:07:33.080096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.815 qpair failed and we were unable to recover it. 
00:27:12.815 [2024-10-15 13:07:33.090006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.815 [2024-10-15 13:07:33.090056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.815 [2024-10-15 13:07:33.090069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.815 [2024-10-15 13:07:33.090075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.815 [2024-10-15 13:07:33.090081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.815 [2024-10-15 13:07:33.090096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.815 qpair failed and we were unable to recover it. 
00:27:12.816 [2024-10-15 13:07:33.100044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.816 [2024-10-15 13:07:33.100102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.816 [2024-10-15 13:07:33.100115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.816 [2024-10-15 13:07:33.100122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.816 [2024-10-15 13:07:33.100128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.816 [2024-10-15 13:07:33.100142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.816 qpair failed and we were unable to recover it. 
00:27:12.816 [2024-10-15 13:07:33.110067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.816 [2024-10-15 13:07:33.110155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.816 [2024-10-15 13:07:33.110168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.816 [2024-10-15 13:07:33.110174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.816 [2024-10-15 13:07:33.110180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.816 [2024-10-15 13:07:33.110194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.816 qpair failed and we were unable to recover it. 
00:27:12.816 [2024-10-15 13:07:33.120099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.816 [2024-10-15 13:07:33.120199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.816 [2024-10-15 13:07:33.120212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.816 [2024-10-15 13:07:33.120218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.816 [2024-10-15 13:07:33.120224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.816 [2024-10-15 13:07:33.120238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.816 qpair failed and we were unable to recover it. 
00:27:12.816 [2024-10-15 13:07:33.130125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.816 [2024-10-15 13:07:33.130177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.816 [2024-10-15 13:07:33.130190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.816 [2024-10-15 13:07:33.130196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.816 [2024-10-15 13:07:33.130202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:12.816 [2024-10-15 13:07:33.130216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.816 qpair failed and we were unable to recover it. 
00:27:13.075 [2024-10-15 13:07:33.140152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.075 [2024-10-15 13:07:33.140223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.075 [2024-10-15 13:07:33.140237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.075 [2024-10-15 13:07:33.140243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.075 [2024-10-15 13:07:33.140252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.075 [2024-10-15 13:07:33.140267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.075 qpair failed and we were unable to recover it. 
00:27:13.075 [2024-10-15 13:07:33.150183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.075 [2024-10-15 13:07:33.150240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.075 [2024-10-15 13:07:33.150253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.075 [2024-10-15 13:07:33.150259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.075 [2024-10-15 13:07:33.150265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.075 [2024-10-15 13:07:33.150280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.075 qpair failed and we were unable to recover it. 
00:27:13.075 [2024-10-15 13:07:33.160211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.075 [2024-10-15 13:07:33.160260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.075 [2024-10-15 13:07:33.160275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.075 [2024-10-15 13:07:33.160282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.075 [2024-10-15 13:07:33.160288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.075 [2024-10-15 13:07:33.160302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.075 qpair failed and we were unable to recover it. 
00:27:13.075 [2024-10-15 13:07:33.170238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.170290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.170303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.170309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.170315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.076 [2024-10-15 13:07:33.170329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.180272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.180323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.180336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.180342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.180348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.076 [2024-10-15 13:07:33.180362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.190334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.190400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.190414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.190420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.190425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.076 [2024-10-15 13:07:33.190440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.200326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.200378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.200392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.200398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.200404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.076 [2024-10-15 13:07:33.200418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.210346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.210398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.210413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.210419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.210425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.076 [2024-10-15 13:07:33.210439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.220414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.220469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.220482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.220489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.220494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.076 [2024-10-15 13:07:33.220508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.230424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.230491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.230505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.230514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.230520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.076 [2024-10-15 13:07:33.230534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.240447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.240503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.240517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.240523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.240529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.076 [2024-10-15 13:07:33.240543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.250461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.250514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.250528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.250534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.250540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.076 [2024-10-15 13:07:33.250554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.260489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.260543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.260557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.260563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.260569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.076 [2024-10-15 13:07:33.260583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.270519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.270611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.270625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.270632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.270637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.076 [2024-10-15 13:07:33.270652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.280573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.280679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.280693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.280699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.280705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff120000b90 00:27:13.076 [2024-10-15 13:07:33.280720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.290622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.290711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.290758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.290778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.290795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff114000b90 00:27:13.076 [2024-10-15 13:07:33.290836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.076 qpair failed and we were unable to recover it. 
00:27:13.076 [2024-10-15 13:07:33.300633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.076 [2024-10-15 13:07:33.300709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.076 [2024-10-15 13:07:33.300731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.076 [2024-10-15 13:07:33.300743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.076 [2024-10-15 13:07:33.300752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff114000b90 00:27:13.077 [2024-10-15 13:07:33.300776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.077 qpair failed and we were unable to recover it. 
00:27:13.077 [2024-10-15 13:07:33.310641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.077 [2024-10-15 13:07:33.310708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.077 [2024-10-15 13:07:33.310729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.077 [2024-10-15 13:07:33.310740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.077 [2024-10-15 13:07:33.310750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff114000b90 00:27:13.077 [2024-10-15 13:07:33.310773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.077 qpair failed and we were unable to recover it. 
00:27:13.077 [2024-10-15 13:07:33.320665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.077 [2024-10-15 13:07:33.320778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.077 [2024-10-15 13:07:33.320843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.077 [2024-10-15 13:07:33.320870] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.077 [2024-10-15 13:07:33.320893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff118000b90 00:27:13.077 [2024-10-15 13:07:33.320943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.077 qpair failed and we were unable to recover it. 
00:27:13.077 [2024-10-15 13:07:33.330669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.077 [2024-10-15 13:07:33.330757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.077 [2024-10-15 13:07:33.330787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.077 [2024-10-15 13:07:33.330803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.077 [2024-10-15 13:07:33.330825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff118000b90 00:27:13.077 [2024-10-15 13:07:33.330859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.077 qpair failed and we were unable to recover it. 
00:27:13.077 [2024-10-15 13:07:33.340730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.077 [2024-10-15 13:07:33.340831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.077 [2024-10-15 13:07:33.340889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.077 [2024-10-15 13:07:33.340915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.077 [2024-10-15 13:07:33.340938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9a0c60 00:27:13.077 [2024-10-15 13:07:33.340986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.077 qpair failed and we were unable to recover it. 
00:27:13.077 [2024-10-15 13:07:33.350736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.077 [2024-10-15 13:07:33.350835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.077 [2024-10-15 13:07:33.350868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.077 [2024-10-15 13:07:33.350883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.077 [2024-10-15 13:07:33.350898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9a0c60 00:27:13.077 [2024-10-15 13:07:33.350930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.077 qpair failed and we were unable to recover it. 00:27:13.077 [2024-10-15 13:07:33.351047] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:13.077 A controller has encountered a failure and is being reset. 00:27:13.077 Controller properly reset. 00:27:13.077 Initializing NVMe Controllers 00:27:13.077 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:13.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:13.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:13.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:13.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:13.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:13.077 Initialization complete. Launching workers. 
00:27:13.077 Starting thread on core 1 00:27:13.077 Starting thread on core 2 00:27:13.077 Starting thread on core 3 00:27:13.077 Starting thread on core 0 00:27:13.077 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:13.077 00:27:13.077 real 0m11.283s 00:27:13.077 user 0m22.027s 00:27:13.077 sys 0m4.733s 00:27:13.077 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:13.077 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.077 ************************************ 00:27:13.077 END TEST nvmf_target_disconnect_tc2 00:27:13.077 ************************************ 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.336 rmmod nvme_tcp 00:27:13.336 rmmod nvme_fabrics 00:27:13.336 rmmod nvme_keyring 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 1373448 ']' 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 1373448 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1373448 ']' 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1373448 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1373448 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1373448' 00:27:13.336 killing process with pid 1373448 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1373448 00:27:13.336 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1373448 00:27:13.595 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:13.595 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:13.595 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:27:13.595 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:13.595 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:27:13.595 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:13.595 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:27:13.595 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:13.595 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:13.595 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.595 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.595 13:07:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.499 13:07:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:15.499 00:27:15.499 real 0m20.073s 00:27:15.499 user 0m49.232s 00:27:15.499 sys 0m9.669s 00:27:15.499 13:07:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:15.499 13:07:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:15.499 ************************************ 00:27:15.499 END TEST nvmf_target_disconnect 00:27:15.499 ************************************ 00:27:15.499 13:07:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:15.758 00:27:15.758 real 5m51.851s 00:27:15.758 user 10m31.136s 00:27:15.758 sys 1m58.226s 00:27:15.758 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:15.758 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.758 ************************************ 00:27:15.758 END TEST nvmf_host 00:27:15.758 ************************************ 00:27:15.758 13:07:35 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:15.758 13:07:35 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:15.758 13:07:35 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:15.759 13:07:35 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:15.759 13:07:35 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:15.759 13:07:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:15.759 ************************************ 00:27:15.759 START TEST nvmf_target_core_interrupt_mode 00:27:15.759 ************************************ 00:27:15.759 13:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:15.759 * Looking for test storage... 
00:27:15.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:15.759 13:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:15.759 13:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:27:15.759 13:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:15.759 13:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:15.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.759 --rc 
genhtml_branch_coverage=1 00:27:15.759 --rc genhtml_function_coverage=1 00:27:15.759 --rc genhtml_legend=1 00:27:15.759 --rc geninfo_all_blocks=1 00:27:15.759 --rc geninfo_unexecuted_blocks=1 00:27:15.759 00:27:15.759 ' 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:15.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.759 --rc genhtml_branch_coverage=1 00:27:15.759 --rc genhtml_function_coverage=1 00:27:15.759 --rc genhtml_legend=1 00:27:15.759 --rc geninfo_all_blocks=1 00:27:15.759 --rc geninfo_unexecuted_blocks=1 00:27:15.759 00:27:15.759 ' 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:15.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.759 --rc genhtml_branch_coverage=1 00:27:15.759 --rc genhtml_function_coverage=1 00:27:15.759 --rc genhtml_legend=1 00:27:15.759 --rc geninfo_all_blocks=1 00:27:15.759 --rc geninfo_unexecuted_blocks=1 00:27:15.759 00:27:15.759 ' 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:15.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.759 --rc genhtml_branch_coverage=1 00:27:15.759 --rc genhtml_function_coverage=1 00:27:15.759 --rc genhtml_legend=1 00:27:15.759 --rc geninfo_all_blocks=1 00:27:15.759 --rc geninfo_unexecuted_blocks=1 00:27:15.759 00:27:15.759 ' 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.759 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.019 
13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.019 13:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:16.019 
13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:16.019 ************************************ 00:27:16.019 START TEST nvmf_abort 00:27:16.019 ************************************ 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:16.019 * Looking for test storage... 
00:27:16.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:16.019 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:16.020 13:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:16.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.020 --rc genhtml_branch_coverage=1 00:27:16.020 --rc genhtml_function_coverage=1 00:27:16.020 --rc genhtml_legend=1 00:27:16.020 --rc geninfo_all_blocks=1 00:27:16.020 --rc geninfo_unexecuted_blocks=1 00:27:16.020 00:27:16.020 ' 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:16.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.020 --rc genhtml_branch_coverage=1 00:27:16.020 --rc genhtml_function_coverage=1 00:27:16.020 --rc genhtml_legend=1 00:27:16.020 --rc geninfo_all_blocks=1 00:27:16.020 --rc geninfo_unexecuted_blocks=1 00:27:16.020 00:27:16.020 ' 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:16.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.020 --rc genhtml_branch_coverage=1 00:27:16.020 --rc genhtml_function_coverage=1 00:27:16.020 --rc genhtml_legend=1 00:27:16.020 --rc geninfo_all_blocks=1 00:27:16.020 --rc geninfo_unexecuted_blocks=1 00:27:16.020 00:27:16.020 ' 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:16.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.020 --rc genhtml_branch_coverage=1 00:27:16.020 --rc genhtml_function_coverage=1 00:27:16.020 --rc genhtml_legend=1 00:27:16.020 --rc geninfo_all_blocks=1 00:27:16.020 --rc geninfo_unexecuted_blocks=1 00:27:16.020 00:27:16.020 ' 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.020 13:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.020 13:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:16.020 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.021 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:16.021 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:16.021 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:16.021 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.021 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.021 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.280 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:16.280 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:16.280 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:16.280 13:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:22.848 13:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:22.848 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:22.848 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.848 
13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:22.848 Found net devices under 0000:86:00.0: cvl_0_0 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:22.848 Found net devices under 0000:86:00.1: cvl_0_1 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:22.848 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.849 13:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.849 13:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:22.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:27:22.849 00:27:22.849 --- 10.0.0.2 ping statistics --- 00:27:22.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.849 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:22.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:27:22.849 00:27:22.849 --- 10.0.0.1 ping statistics --- 00:27:22.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.849 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=1377980 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1377980 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1377980 ']' 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.849 [2024-10-15 13:07:42.307118] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:22.849 [2024-10-15 13:07:42.308097] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:27:22.849 [2024-10-15 13:07:42.308134] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.849 [2024-10-15 13:07:42.380330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:22.849 [2024-10-15 13:07:42.422987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.849 [2024-10-15 13:07:42.423020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.849 [2024-10-15 13:07:42.423028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.849 [2024-10-15 13:07:42.423034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.849 [2024-10-15 13:07:42.423039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.849 [2024-10-15 13:07:42.424484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.849 [2024-10-15 13:07:42.424588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.849 [2024-10-15 13:07:42.424590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.849 [2024-10-15 13:07:42.491551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:22.849 [2024-10-15 13:07:42.492527] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:22.849 [2024-10-15 13:07:42.492850] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:22.849 [2024-10-15 13:07:42.492997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.849 [2024-10-15 13:07:42.561427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:22.849 Malloc0 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.849 Delay0 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:22.849 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.850 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.850 [2024-10-15 13:07:42.661403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.850 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.850 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:22.850 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.850 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.850 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.850 13:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:22.850 [2024-10-15 13:07:42.777413] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:24.753 Initializing NVMe Controllers 00:27:24.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:24.753 controller IO queue size 128 less than required 00:27:24.753 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:24.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:24.753 Initialization complete. Launching workers. 
00:27:24.753 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38302 00:27:24.753 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38359, failed to submit 66 00:27:24.753 success 38302, unsuccessful 57, failed 0 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:24.753 rmmod nvme_tcp 00:27:24.753 rmmod nvme_fabrics 00:27:24.753 rmmod nvme_keyring 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:24.753 13:07:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1377980 ']' 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1377980 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1377980 ']' 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1377980 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1377980 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1377980' 00:27:24.753 killing process with pid 1377980 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1377980 00:27:24.753 13:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1377980 00:27:25.012 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:25.012 13:07:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:25.012 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:25.012 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:25.012 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:27:25.012 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:25.012 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:27:25.012 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.012 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:25.012 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.012 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.012 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.915 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:26.915 00:27:26.915 real 0m11.040s 00:27:26.915 user 0m10.118s 00:27:26.915 sys 0m5.641s 00:27:26.915 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:26.915 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:26.915 ************************************ 00:27:26.915 END TEST nvmf_abort 00:27:26.915 ************************************ 00:27:26.915 13:07:47 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:26.915 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:26.915 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:26.915 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:27.174 ************************************ 00:27:27.174 START TEST nvmf_ns_hotplug_stress 00:27:27.174 ************************************ 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:27.174 * Looking for test storage... 
00:27:27.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:27.174 13:07:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:27.174 13:07:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:27.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.174 --rc genhtml_branch_coverage=1 00:27:27.174 --rc genhtml_function_coverage=1 00:27:27.174 --rc genhtml_legend=1 00:27:27.174 --rc geninfo_all_blocks=1 00:27:27.174 --rc geninfo_unexecuted_blocks=1 00:27:27.174 00:27:27.174 ' 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:27.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.174 --rc genhtml_branch_coverage=1 00:27:27.174 --rc genhtml_function_coverage=1 00:27:27.174 --rc genhtml_legend=1 00:27:27.174 --rc geninfo_all_blocks=1 00:27:27.174 --rc geninfo_unexecuted_blocks=1 00:27:27.174 00:27:27.174 ' 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:27.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.174 --rc genhtml_branch_coverage=1 00:27:27.174 --rc genhtml_function_coverage=1 00:27:27.174 --rc genhtml_legend=1 00:27:27.174 --rc geninfo_all_blocks=1 00:27:27.174 --rc geninfo_unexecuted_blocks=1 00:27:27.174 00:27:27.174 ' 00:27:27.174 13:07:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:27.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.174 --rc genhtml_branch_coverage=1 00:27:27.174 --rc genhtml_function_coverage=1 00:27:27.174 --rc genhtml_legend=1 00:27:27.174 --rc geninfo_all_blocks=1 00:27:27.174 --rc geninfo_unexecuted_blocks=1 00:27:27.174 00:27:27.174 ' 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.174 13:07:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.174 
13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:27.174 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:27.175 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.175 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.175 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.175 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:27.175 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:27:27.175 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:27.175 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:33.742 
13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.742 13:07:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:33.742 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.742 13:07:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:33.742 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.742 
13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:33.742 Found net devices under 0000:86:00.0: cvl_0_0 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:33.742 Found net devices under 0000:86:00.1: cvl_0_1 00:27:33.742 
13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:33.742 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:33.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:27:33.743 00:27:33.743 --- 10.0.0.2 ping statistics --- 00:27:33.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.743 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:33.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:27:33.743 00:27:33.743 --- 10.0.0.1 ping statistics --- 00:27:33.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.743 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:33.743 13:07:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1381975 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1381975 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1381975 ']' 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:33.743 [2024-10-15 13:07:53.483293] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:33.743 [2024-10-15 13:07:53.484196] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:27:33.743 [2024-10-15 13:07:53.484232] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.743 [2024-10-15 13:07:53.556868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:33.743 [2024-10-15 13:07:53.596268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.743 [2024-10-15 13:07:53.596306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.743 [2024-10-15 13:07:53.596313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.743 [2024-10-15 13:07:53.596319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.743 [2024-10-15 13:07:53.596324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:33.743 [2024-10-15 13:07:53.597770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.743 [2024-10-15 13:07:53.597856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.743 [2024-10-15 13:07:53.597856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.743 [2024-10-15 13:07:53.664946] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:33.743 [2024-10-15 13:07:53.665826] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:33.743 [2024-10-15 13:07:53.666039] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:33.743 [2024-10-15 13:07:53.666221] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:33.743 [2024-10-15 13:07:53.906645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.743 13:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:34.027 13:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.027 [2024-10-15 13:07:54.303003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.027 13:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:34.286 13:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:34.545 Malloc0 00:27:34.545 13:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:34.804 Delay0 00:27:34.804 13:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:34.804 13:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:35.063 NULL1 00:27:35.063 13:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:35.322 13:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1382240 00:27:35.323 13:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:35.323 13:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:35.323 13:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:36.700 Read completed with error (sct=0, sc=11) 00:27:36.700 13:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:36.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:27:36.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.700 13:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:36.700 13:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:36.959 true 00:27:36.959 13:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:36.959 13:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.916 13:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:37.916 13:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:37.916 13:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:38.221 true 00:27:38.221 13:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:38.221 13:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:38.536 13:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.536 13:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:38.536 13:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:38.795 true 00:27:38.795 13:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:38.795 13:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.732 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:39.991 13:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:39.991 13:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:40.250 true 00:27:40.250 13:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:40.250 13:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.250 13:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.509 13:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:40.509 13:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:40.768 true 00:27:40.768 13:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:40.768 13:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.145 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.145 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:42.145 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1006 00:27:42.145 true 00:27:42.404 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:42.404 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.404 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.663 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:42.663 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:42.922 true 00:27:42.922 13:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:42.922 13:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.300 13:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:27:44.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.300 13:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:44.300 13:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:44.560 true 00:27:44.560 13:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:44.560 13:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.497 13:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:45.497 13:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:45.497 13:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:45.757 true 00:27:45.757 13:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:45.757 13:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:46.016 13:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.275 13:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:46.275 13:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:46.275 true 00:27:46.275 13:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:46.275 13:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.650 13:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.650 13:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:47.650 13:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:47.650 true 00:27:47.650 13:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:47.650 13:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.909 13:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.167 13:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:48.168 13:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:48.426 true 00:27:48.426 13:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:48.426 13:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.362 13:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.621 13:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:49.621 13:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:49.880 true 00:27:49.880 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:49.880 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.814 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.814 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:50.814 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:51.073 true 00:27:51.073 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:51.073 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.332 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.332 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:51.332 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:51.591 true 00:27:51.591 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:51.592 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.971 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.971 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:52.971 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:53.231 true 00:27:53.231 13:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:53.231 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.169 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.169 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:54.169 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:54.429 true 00:27:54.429 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:54.429 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.689 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.689 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:54.689 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:54.947 true 
00:27:54.947 13:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:54.947 13:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.883 13:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.141 13:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:56.141 13:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:56.400 true 00:27:56.400 13:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:56.401 13:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.338 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.338 13:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.338 13:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:57.338 13:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:57.597 true 00:27:57.597 13:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:57.597 13:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.856 13:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.114 13:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:58.114 13:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:58.114 true 00:27:58.114 13:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:58.114 13:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.493 13:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.493 13:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:59.493 13:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:59.751 true 00:27:59.751 13:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:27:59.751 13:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.688 13:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.945 13:08:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:00.945 13:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:00.946 true 00:28:00.946 13:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:28:00.946 13:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.204 13:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.462 13:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:01.462 13:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:01.719 true 00:28:01.719 13:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:28:01.719 13:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.652 13:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.910 13:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:02.910 13:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:03.168 true 00:28:03.168 13:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:28:03.168 13:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.104 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.104 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:04.104 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 
00:28:04.363 true 00:28:04.363 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:28:04.363 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.622 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.622 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:04.881 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:04.881 true 00:28:04.881 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:28:04.881 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.260 Initializing NVMe Controllers 00:28:06.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.260 Controller IO queue size 128, less than required. 00:28:06.260 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:06.260 Controller IO queue size 128, less than required. 00:28:06.260 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:06.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:06.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:06.260 Initialization complete. Launching workers. 00:28:06.260 ======================================================== 00:28:06.260 Latency(us) 00:28:06.260 Device Information : IOPS MiB/s Average min max 00:28:06.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1857.40 0.91 47103.82 2602.12 1013128.18 00:28:06.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17776.60 8.68 7200.07 1083.13 373843.36 00:28:06.260 ======================================================== 00:28:06.260 Total : 19634.00 9.59 10975.01 1083.13 1013128.18 00:28:06.260 00:28:06.260 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.260 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:06.260 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:06.519 true 00:28:06.519 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1382240 00:28:06.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1382240) - No such process 00:28:06.519 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1382240 00:28:06.519 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.778 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:06.778 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:06.778 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:06.778 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:06.778 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:06.778 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:07.037 null0 00:28:07.037 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.037 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.037 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:07.296 null1 00:28:07.296 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.296 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.296 13:08:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:07.296 null2 00:28:07.296 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.296 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.296 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:07.555 null3 00:28:07.555 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.555 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.555 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:07.813 null4 00:28:07.813 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.813 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.813 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:07.813 null5 00:28:07.813 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.813 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.813 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:08.072 null6 00:28:08.072 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:08.072 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:08.072 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:08.331 null7 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1387709 1387711 1387714 1387717 1387720 1387723 1387726 1387729 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 
00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.331 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:08.590 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.590 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:08.590 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:08.590 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.591 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:08.850 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.850 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.850 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:08.850 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.850 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.850 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:08.850 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.850 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.850 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:08.850 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.850 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.850 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:08.850 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.850 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:08.850 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:08.850 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:08.850 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:08.850 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:08.850 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:08.850 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.109 13:08:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.109 13:08:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.109 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:09.368 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.368 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:09.368 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:09.368 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:09.368 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:09.368 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:09.368 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:09.368 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:09.627 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.627 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.627 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:09.627 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.627 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.627 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:09.627 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.627 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.627 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:09.628 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:28:09.886 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:09.886 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:09.886 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:09.886 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.886 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.144 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:10.144 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:10.144 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:10.144 13:08:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:10.144 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.144 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:10.144 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:10.144 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:10.403 13:08:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.403 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:10.662 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:10.662 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:10.662 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:10.662 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.663 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:28:10.921 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.922 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.922 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:10.922 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.922 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.922 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:10.922 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.922 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.922 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:10.922 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:10.922 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.922 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:10.922 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:10.922 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:10.922 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:10.922 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:10.922 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.181 13:08:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.181 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:11.439 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.439 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.439 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:11.439 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.439 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:11.439 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.439 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.439 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:11.698 13:08:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.698 13:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.698 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.698 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.698 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.698 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:11.698 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.698 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:11.957 13:08:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.957 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.957 13:08:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:12.215 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:12.215 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:12.215 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:12.215 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.215 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:12.215 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:12.215 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:12.215 13:08:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:12.474 rmmod nvme_tcp 00:28:12.474 rmmod nvme_fabrics 00:28:12.474 rmmod nvme_keyring 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:12.474 13:08:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1381975 ']' 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1381975 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1381975 ']' 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1381975 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1381975 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1381975' 00:28:12.474 killing process with pid 1381975 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1381975 00:28:12.474 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1381975 00:28:12.735 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:12.735 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p 
]] 00:28:12.735 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:12.736 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:12.736 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:28:12.736 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:12.736 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:28:12.736 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:12.736 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:12.736 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.736 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.736 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.760 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:14.760 00:28:14.760 real 0m47.760s 00:28:14.760 user 2m57.373s 00:28:14.760 sys 0m20.658s 00:28:14.760 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:14.760 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:14.760 ************************************ 00:28:14.760 END TEST nvmf_ns_hotplug_stress 00:28:14.760 
************************************ 00:28:14.760 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:14.760 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:14.760 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:14.760 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:14.760 ************************************ 00:28:14.760 START TEST nvmf_delete_subsystem 00:28:14.760 ************************************ 00:28:14.760 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:15.020 * Looking for test storage... 
00:28:15.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.020 13:08:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.020 13:08:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.020 --rc genhtml_branch_coverage=1 00:28:15.020 --rc genhtml_function_coverage=1 00:28:15.020 --rc genhtml_legend=1 00:28:15.020 --rc geninfo_all_blocks=1 00:28:15.020 --rc geninfo_unexecuted_blocks=1 00:28:15.020 00:28:15.020 ' 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.020 --rc genhtml_branch_coverage=1 00:28:15.020 --rc genhtml_function_coverage=1 00:28:15.020 --rc genhtml_legend=1 00:28:15.020 --rc geninfo_all_blocks=1 00:28:15.020 --rc geninfo_unexecuted_blocks=1 00:28:15.020 00:28:15.020 ' 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.020 --rc genhtml_branch_coverage=1 00:28:15.020 --rc genhtml_function_coverage=1 00:28:15.020 --rc genhtml_legend=1 00:28:15.020 --rc geninfo_all_blocks=1 00:28:15.020 --rc geninfo_unexecuted_blocks=1 00:28:15.020 00:28:15.020 ' 00:28:15.020 13:08:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.020 --rc genhtml_branch_coverage=1 00:28:15.020 --rc genhtml_function_coverage=1 00:28:15.020 --rc genhtml_legend=1 00:28:15.020 --rc geninfo_all_blocks=1 00:28:15.020 --rc geninfo_unexecuted_blocks=1 00:28:15.020 00:28:15.020 ' 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.020 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.021 13:08:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.021 
13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:15.021 13:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.021 13:08:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:21.590 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:28:21.590 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.590 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:21.591 13:08:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:21.591 Found net devices under 0000:86:00.0: cvl_0_0 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:21.591 Found net devices under 0000:86:00.1: cvl_0_1 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:21.591 13:08:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.591 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:28:21.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:28:21.591 00:28:21.591 --- 10.0.0.2 ping statistics --- 00:28:21.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.591 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:21.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:28:21.591 00:28:21.591 --- 10.0.0.1 ping statistics --- 00:28:21.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.591 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
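The `nvmf_tcp_init` sequence above builds the test topology by moving one port of the dual-port NIC into its own network namespace, so target and initiator can talk over real hardware on one host. A minimal dry-run sketch of those steps, with the interface names (`cvl_0_0`, `cvl_0_1`), namespace, and addresses taken directly from the log — `run()` only prints each command, since the real ones need root and the physical NICs:

```shell
# Dry-run of the netns bring-up performed by nvmf/common.sh (as logged above).
# run() echoes instead of executing, so this is safe anywhere.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                  # namespace for the target side
run ip link set cvl_0_0 netns "$NS"                     # move target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (host ns)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                                  # connectivity check, as in the log
```

The target app is then launched with `ip netns exec cvl_0_0_ns_spdk ...` (the `NVMF_TARGET_NS_CMD` prefix), which is why the later `nvmf_tgt` invocation in the log carries that prefix.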
nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1391956 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1391956 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1391956 ']' 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:21.591 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.591 [2024-10-15 13:08:41.256841] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:21.591 [2024-10-15 13:08:41.258217] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:28:21.591 [2024-10-15 13:08:41.258265] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.591 [2024-10-15 13:08:41.350682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:21.591 [2024-10-15 13:08:41.391574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.591 [2024-10-15 13:08:41.391617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.591 [2024-10-15 13:08:41.391624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.591 [2024-10-15 13:08:41.391631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.591 [2024-10-15 13:08:41.391635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.592 [2024-10-15 13:08:41.392825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.592 [2024-10-15 13:08:41.392826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.592 [2024-10-15 13:08:41.458954] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:21.592 [2024-10-15 13:08:41.459626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:21.592 [2024-10-15 13:08:41.459828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.592 [2024-10-15 13:08:41.523093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.592 [2024-10-15 13:08:41.553951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.592 NULL1 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.592 Delay0 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1392178 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:21.592 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:21.592 [2024-10-15 13:08:41.653771] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:23.495 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:23.495 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.495 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 
00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 Read completed with error (sct=0, sc=8) 00:28:23.754 starting I/O failed: -6 00:28:23.754 Write completed with error (sct=0, sc=8) 00:28:23.755 Write completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 Write completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 starting I/O 
failed: -6 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 Write completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Write completed with error (sct=0, sc=8) 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Write completed with error (sct=0, sc=8) 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Write completed with error (sct=0, sc=8) 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 Write completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Write completed with error (sct=0, sc=8) 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 Write completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Write completed with error (sct=0, sc=8) 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 Write completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 starting I/O failed: -6 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 Read completed with error (sct=0, sc=8) 00:28:23.755 
00:28:23.755 starting I/O failed: -6
00:28:23.755 Read completed with error (sct=0, sc=8)
00:28:23.755 Write completed with error (sct=0, sc=8)
[... many repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines at 00:28:23.755-00:28:23.756 omitted ...]
00:28:23.756 [2024-10-15 13:08:43.856607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4b94000c00 is same with the state(6) to be set
00:28:24.692 [2024-10-15 13:08:44.831546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa41a70 is same with the state(6) to be set
[... repeated completion-error lines at 00:28:24.692 omitted ...]
00:28:24.692 [2024-10-15 13:08:44.855996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa40390 is same with the state(6) to be set
[... repeated completion-error lines omitted ...]
00:28:24.692 [2024-10-15 13:08:44.856166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa40750 is same with the state(6) to be set
[... repeated completion-error lines omitted ...]
00:28:24.692 [2024-10-15 13:08:44.858542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4b9400d7a0 is same with the state(6) to be set
[... repeated completion-error lines omitted ...]
00:28:24.692 [2024-10-15 13:08:44.859043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4b9400cfe0 is same with the state(6) to be set
00:28:24.692 Initializing NVMe Controllers
00:28:24.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:24.692 Controller IO queue size 128, less than required.
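Every failed completion above carries the same status pair, sct=0 / sc=8. Read against the NVMe base specification's Generic Command Status table (my interpretation, not something the log itself states), that pair decodes to "Command Aborted due to SQ Deletion", which is consistent with I/O being in flight while the subsystem is deleted. A hypothetical helper to translate the pair, listing only a few codes:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not part of the SPDK test scripts): map an NVMe
# (sct, sc) status pair, as printed in the log lines above, to a name.
# Values assumed from the NVMe base spec's Generic Command Status table.
nvme_status_name() {
  local sct=$1 sc=$2
  if [[ $sct -ne 0 ]]; then
    echo "non-generic status type ($sct)"
    return
  fi
  case $sc in
    0) echo "Successful Completion" ;;
    4) echo "Data Transfer Error" ;;
    7) echo "Command Abort Requested" ;;
    8) echo "Command Aborted due to SQ Deletion" ;;
    *) printf 'Generic status 0x%02x\n' "$sc" ;;
  esac
}

nvme_status_name 0 8   # -> Command Aborted due to SQ Deletion
```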
00:28:24.693 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:24.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:24.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:24.693 Initialization complete. Launching workers.
00:28:24.693 ========================================================
00:28:24.693                                                            Latency(us)
00:28:24.693 Device Information                                                        :    IOPS   MiB/s    Average        min        max
00:28:24.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  183.21    0.09  910690.76     335.77  1007012.46
00:28:24.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  185.20    0.09  907122.06     331.51  1009827.46
00:28:24.693 ========================================================
00:28:24.693 Total                                                                     :  368.40    0.18  908896.76     331.51  1009827.46
00:28:24.693
00:28:24.693 [2024-10-15 13:08:44.859555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa41a70 (9): Bad file descriptor
00:28:24.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:24.693 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:24.693 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:24.693 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1392178
00:28:24.693 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- #
kill -0 1392178
00:28:25.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1392178) - No such process
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1392178
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1392178
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1392178
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:25.261 [2024-10-15 13:08:45.389844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1392662
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1392662
00:28:25.261 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:25.261 [2024-10-15 13:08:45.463954] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:28:25.828 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:25.828 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1392662
00:28:25.828 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:26.396 13:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:26.396 13:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1392662
00:28:26.396 13:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:26.655 13:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:26.655 13:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1392662
00:28:26.655 13:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:27.223 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:27.223 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1392662
00:28:27.223 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:27.793 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:27.793 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1392662
00:28:27.793 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:28.362 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:28.362 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1392662
00:28:28.362 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:28.362 Initializing NVMe Controllers
00:28:28.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:28.362 Controller IO queue size 128, less than required.
00:28:28.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:28.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:28.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:28.362 Initialization complete. Launching workers.
00:28:28.362 ========================================================
00:28:28.362                                                            Latency(us)
00:28:28.362 Device Information                                                        :    IOPS   MiB/s     Average         min         max
00:28:28.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  128.00    0.06  1002651.69  1000162.25  1007998.12
00:28:28.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  128.00    0.06  1004761.74  1000319.98  1011588.19
00:28:28.362 ========================================================
00:28:28.362 Total                                                                     :  256.00    0.12  1003706.72  1000162.25  1011588.19
00:28:28.362
00:28:28.622 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:28.622 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1392662
00:28:28.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1392662) - No such process
00:28:28.622 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1392662
00:28:28.622 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:28:28.622 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:28:28.622 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:28:28.622 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:28:28.622 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:28.622 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:28:28.622 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:28.622 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:28.882 rmmod nvme_tcp
00:28:28.882 rmmod nvme_fabrics
00:28:28.882 rmmod nvme_keyring
00:28:28.882 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:28.882 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:28:28.882 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:28:28.882 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1391956 ']'
00:28:28.882 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1391956
00:28:28.882 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1391956 ']'
00:28:28.882 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1391956
00:28:28.882 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:28:28.882 13:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:28.882 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1391956
00:28:28.882 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:28.882 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:28.882 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1391956'
00:28:28.882 killing process with pid 1391956
00:28:28.882 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1391956
00:28:28.882 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1391956
00:28:29.141 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:28:29.141 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:28:29.142 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:28:29.142 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:28:29.142 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save
00:28:29.142 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:28:29.142 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore
00:28:29.142 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:29.142 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:29.142 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
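The trace above repeats delete_subsystem.sh's polling idiom: bump a counter with `(( delay++ > 20 ))`, probe the perf process with `kill -0`, and `sleep 0.5` between probes. A standalone sketch of that pattern follows; the function name `wait_for_exit` and the exact loop shape are mine, not the test script's actual code, though the 0.5s interval and the 20-iteration bound come from the trace:

```shell
#!/usr/bin/env bash
# Sketch (assumed, simplified) of the polling loop the xtrace lines show:
# wait for a process to exit, giving up after ~10s of 0.5s probes.
wait_for_exit() {
  local pid=$1 delay=0
  while kill -0 "$pid" 2>/dev/null; do    # signal 0 only checks existence
    (( delay++ > 20 )) && return 1        # bound matches the trace's "> 20"
    sleep 0.5
  done
  return 0                                # process is gone
}

sleep 1 & wait_for_exit "$!"   # returns 0 shortly after the sleep exits
```

Once the process has exited, a further `kill -0` prints "No such process", which is exactly the message the log shows when the loop's target disappears.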
00:28:29.142 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:29.142 13:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:31.049 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:31.049
00:28:31.049 real 0m16.198s
00:28:31.049 user 0m26.158s
00:28:31.049 sys 0m6.133s
00:28:31.049 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:31.049 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:31.049 ************************************
00:28:31.049 END TEST nvmf_delete_subsystem
00:28:31.049 ************************************
00:28:31.049 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:28:31.049 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:28:31.049 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:31.049 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:31.049 ************************************
00:28:31.049 START TEST nvmf_host_management
00:28:31.049 ************************************
00:28:31.049 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:28:31.310 * Looking for test storage...
00:28:31.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:31.310 13:08:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:31.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.310 --rc genhtml_branch_coverage=1 00:28:31.310 --rc genhtml_function_coverage=1 00:28:31.310 --rc genhtml_legend=1 00:28:31.310 --rc geninfo_all_blocks=1 00:28:31.310 --rc geninfo_unexecuted_blocks=1 00:28:31.310 00:28:31.310 ' 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:31.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.310 --rc genhtml_branch_coverage=1 00:28:31.310 --rc genhtml_function_coverage=1 00:28:31.310 --rc genhtml_legend=1 00:28:31.310 --rc geninfo_all_blocks=1 00:28:31.310 --rc geninfo_unexecuted_blocks=1 00:28:31.310 00:28:31.310 ' 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:31.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.310 --rc genhtml_branch_coverage=1 00:28:31.310 --rc genhtml_function_coverage=1 00:28:31.310 --rc genhtml_legend=1 00:28:31.310 --rc geninfo_all_blocks=1 00:28:31.310 --rc geninfo_unexecuted_blocks=1 00:28:31.310 00:28:31.310 ' 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:31.310 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.310 --rc genhtml_branch_coverage=1 00:28:31.310 --rc genhtml_function_coverage=1 00:28:31.310 --rc genhtml_legend=1 00:28:31.310 --rc geninfo_all_blocks=1 00:28:31.310 --rc geninfo_unexecuted_blocks=1 00:28:31.310 00:28:31.310 ' 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.310 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.311 13:08:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.311 
13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.311 13:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:37.879 
13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.879 13:08:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:37.879 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:37.880 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:37.880 13:08:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:37.880 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.880 13:08:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:37.880 Found net devices under 0000:86:00.0: cvl_0_0 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:37.880 Found net devices under 0000:86:00.1: cvl_0_1 00:28:37.880 13:08:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:37.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:28:37.880 00:28:37.880 --- 10.0.0.2 ping statistics --- 00:28:37.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.880 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:37.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:28:37.880 00:28:37.880 --- 10.0.0.1 ping statistics --- 00:28:37.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.880 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1396832 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1396832 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1396832 ']' 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.880 [2024-10-15 13:08:57.550737] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:37.880 [2024-10-15 13:08:57.551708] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:28:37.880 [2024-10-15 13:08:57.551744] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.880 [2024-10-15 13:08:57.625101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:37.880 [2024-10-15 13:08:57.668105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.880 [2024-10-15 13:08:57.668142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.880 [2024-10-15 13:08:57.668149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.880 [2024-10-15 13:08:57.668155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.880 [2024-10-15 13:08:57.668160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:37.880 [2024-10-15 13:08:57.669575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.880 [2024-10-15 13:08:57.669676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:37.880 [2024-10-15 13:08:57.669781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.880 [2024-10-15 13:08:57.669781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:37.880 [2024-10-15 13:08:57.737040] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:37.880 [2024-10-15 13:08:57.738036] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:37.880 [2024-10-15 13:08:57.738264] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:37.880 [2024-10-15 13:08:57.738921] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:37.880 [2024-10-15 13:08:57.738943] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.880 [2024-10-15 13:08:57.806565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.880 13:08:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.880 Malloc0 00:28:37.880 [2024-10-15 13:08:57.894874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1396908 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1396908 /var/tmp/bdevperf.sock 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1396908 ']' 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:37.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:37.880 { 00:28:37.880 "params": { 00:28:37.880 "name": "Nvme$subsystem", 00:28:37.880 "trtype": "$TEST_TRANSPORT", 00:28:37.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.880 "adrfam": "ipv4", 00:28:37.880 "trsvcid": "$NVMF_PORT", 00:28:37.880 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.880 "hdgst": ${hdgst:-false}, 00:28:37.880 "ddgst": ${ddgst:-false} 00:28:37.880 }, 00:28:37.880 "method": "bdev_nvme_attach_controller" 00:28:37.880 } 00:28:37.880 EOF 00:28:37.880 )") 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:28:37.880 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:37.880 "params": { 00:28:37.880 "name": "Nvme0", 00:28:37.880 "trtype": "tcp", 00:28:37.880 "traddr": "10.0.0.2", 00:28:37.880 "adrfam": "ipv4", 00:28:37.880 "trsvcid": "4420", 00:28:37.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:37.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:37.880 "hdgst": false, 00:28:37.880 "ddgst": false 00:28:37.880 }, 00:28:37.880 "method": "bdev_nvme_attach_controller" 00:28:37.880 }' 00:28:37.880 [2024-10-15 13:08:57.991795] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:28:37.880 [2024-10-15 13:08:57.991847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396908 ] 00:28:37.880 [2024-10-15 13:08:58.060660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.880 [2024-10-15 13:08:58.101655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.139 Running I/O for 10 seconds... 
00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:38.139 13:08:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:38.139 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:38.399 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:38.399 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:38.399 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:38.399 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:38.399 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:38.399 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.399 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.661 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:28:38.661 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:28:38.661 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:38.661 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:38.661 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:38.661 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:38.661 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.661 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.661 [2024-10-15 13:08:58.742302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.661 [2024-10-15 13:08:58.742343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.661 [2024-10-15 13:08:58.742351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.661 [2024-10-15 13:08:58.742357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.661 [2024-10-15 13:08:58.742363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is 
same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be 
set 00:28:38.662 [2024-10-15 13:08:58.742514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 
13:08:58.742584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742665] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13caf80 is same with the state(6) to be set 00:28:38.662 [2024-10-15 13:08:58.742796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.742828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.662 [2024-10-15 13:08:58.742845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.742853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.662 [2024-10-15 13:08:58.742862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.742869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.662 [2024-10-15 13:08:58.742877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.742884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.662 [2024-10-15 13:08:58.742892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.742898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.662 [2024-10-15 13:08:58.742906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.742913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.662 [2024-10-15 13:08:58.742921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.742927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.662 [2024-10-15 13:08:58.742935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.742942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.662 [2024-10-15 13:08:58.742950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.742957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.662 [2024-10-15 13:08:58.742964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.742971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.662 [2024-10-15 13:08:58.742979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.742985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.662 [2024-10-15 13:08:58.742992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.742999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:38.662 [2024-10-15 13:08:58.743011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.743017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.662 [2024-10-15 13:08:58.743025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.662 [2024-10-15 13:08:58.743031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743093] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 
[2024-10-15 13:08:58.743340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.663 [2024-10-15 13:08:58.743658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.663 [2024-10-15 13:08:58.743665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.664 [2024-10-15 13:08:58.743673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.664 [2024-10-15 13:08:58.743679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.664 
[2024-10-15 13:08:58.743687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.664 [2024-10-15 13:08:58.743694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.664 [2024-10-15 13:08:58.743702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.664 [2024-10-15 13:08:58.743708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.664 [2024-10-15 13:08:58.743716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.664 [2024-10-15 13:08:58.743722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.664 [2024-10-15 13:08:58.743730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.664 [2024-10-15 13:08:58.743737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.664 [2024-10-15 13:08:58.743745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.664 [2024-10-15 13:08:58.743751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.664 [2024-10-15 13:08:58.743761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.664 [2024-10-15 13:08:58.743767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.664 [2024-10-15 13:08:58.743776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.664 [2024-10-15 13:08:58.743782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.664 [2024-10-15 13:08:58.743790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f8850 is same with the state(6) to be set 00:28:38.664 [2024-10-15 13:08:58.743840] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24f8850 was disconnected and freed. reset controller. 00:28:38.664 [2024-10-15 13:08:58.744760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:38.664 task offset: 98304 on job bdev=Nvme0n1 fails 00:28:38.664 00:28:38.664 Latency(us) 00:28:38.664 [2024-10-15T11:08:58.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.664 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:38.664 Job: Nvme0n1 ended in about 0.41 seconds with error 00:28:38.664 Verification LBA range: start 0x0 length 0x400 00:28:38.664 Nvme0n1 : 0.41 1887.64 117.98 157.30 0.00 30477.12 3885.35 27213.04 00:28:38.664 [2024-10-15T11:08:58.983Z] =================================================================================================================== 00:28:38.664 [2024-10-15T11:08:58.983Z] Total : 1887.64 117.98 157.30 0.00 30477.12 3885.35 27213.04 00:28:38.664 [2024-10-15 13:08:58.747281] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:38.664 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.664 [2024-10-15 13:08:58.747303] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22df5c0 (9): Bad file descriptor 00:28:38.664 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:38.664 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.664 [2024-10-15 13:08:58.748240] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:28:38.664 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.664 [2024-10-15 13:08:58.748303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:38.664 [2024-10-15 13:08:58.748327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.664 [2024-10-15 13:08:58.748341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:28:38.664 [2024-10-15 13:08:58.748349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:38.664 [2024-10-15 13:08:58.748356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.664 [2024-10-15 13:08:58.748362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22df5c0 00:28:38.664 [2024-10-15 13:08:58.748381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22df5c0 (9): Bad file descriptor 00:28:38.664 [2024-10-15 13:08:58.748392] 
nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:38.664 [2024-10-15 13:08:58.748399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:38.664 [2024-10-15 13:08:58.748410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:38.664 [2024-10-15 13:08:58.748421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.664 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.664 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:39.602 13:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1396908 00:28:39.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1396908) - No such process 00:28:39.602 13:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:39.602 13:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:39.602 13:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:39.602 13:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:39.602 13:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:28:39.602 13:08:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:28:39.602 13:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:39.602 13:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:39.602 { 00:28:39.602 "params": { 00:28:39.602 "name": "Nvme$subsystem", 00:28:39.602 "trtype": "$TEST_TRANSPORT", 00:28:39.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.602 "adrfam": "ipv4", 00:28:39.602 "trsvcid": "$NVMF_PORT", 00:28:39.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.602 "hdgst": ${hdgst:-false}, 00:28:39.602 "ddgst": ${ddgst:-false} 00:28:39.602 }, 00:28:39.602 "method": "bdev_nvme_attach_controller" 00:28:39.602 } 00:28:39.602 EOF 00:28:39.602 )") 00:28:39.602 13:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:28:39.602 13:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:28:39.602 13:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:28:39.602 13:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:39.602 "params": { 00:28:39.602 "name": "Nvme0", 00:28:39.602 "trtype": "tcp", 00:28:39.602 "traddr": "10.0.0.2", 00:28:39.602 "adrfam": "ipv4", 00:28:39.602 "trsvcid": "4420", 00:28:39.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:39.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:39.602 "hdgst": false, 00:28:39.602 "ddgst": false 00:28:39.602 }, 00:28:39.602 "method": "bdev_nvme_attach_controller" 00:28:39.602 }' 00:28:39.602 [2024-10-15 13:08:59.813016] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:28:39.602 [2024-10-15 13:08:59.813065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1397158 ] 00:28:39.602 [2024-10-15 13:08:59.880178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.603 [2024-10-15 13:08:59.919020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.170 Running I/O for 1 seconds... 00:28:41.105 1984.00 IOPS, 124.00 MiB/s 00:28:41.105 Latency(us) 00:28:41.105 [2024-10-15T11:09:01.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.105 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.105 Verification LBA range: start 0x0 length 0x400 00:28:41.105 Nvme0n1 : 1.01 2037.46 127.34 0.00 0.00 30924.34 6366.35 26963.38 00:28:41.105 [2024-10-15T11:09:01.424Z] =================================================================================================================== 00:28:41.105 [2024-10-15T11:09:01.424Z] Total : 2037.46 127.34 0.00 0.00 30924.34 6366.35 26963.38 00:28:41.105 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:41.105 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:41.105 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:41.105 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:41.105 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:28:41.105 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:41.105 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:41.105 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.105 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:41.105 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.105 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.105 rmmod nvme_tcp 00:28:41.105 rmmod nvme_fabrics 00:28:41.105 rmmod nvme_keyring 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1396832 ']' 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1396832 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1396832 ']' 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1396832 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:28:41.364 13:09:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1396832 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1396832' 00:28:41.364 killing process with pid 1396832 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1396832 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1396832 00:28:41.364 [2024-10-15 13:09:01.659590] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:41.364 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:41.623 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:41.623 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:41.623 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:41.623 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:28:41.623 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:41.623 13:09:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:28:41.623 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.623 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.623 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.623 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.623 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.528 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.529 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:43.529 00:28:43.529 real 0m12.407s 00:28:43.529 user 0m18.153s 00:28:43.529 sys 0m6.294s 00:28:43.529 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:43.529 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:43.529 ************************************ 00:28:43.529 END TEST nvmf_host_management 00:28:43.529 ************************************ 00:28:43.529 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:43.529 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:43.529 
13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:43.529 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:43.529 ************************************ 00:28:43.529 START TEST nvmf_lvol 00:28:43.529 ************************************ 00:28:43.529 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:43.789 * Looking for test storage... 00:28:43.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.789 13:09:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:43.789 13:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:43.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.789 --rc genhtml_branch_coverage=1 00:28:43.789 --rc 
genhtml_function_coverage=1 00:28:43.789 --rc genhtml_legend=1 00:28:43.789 --rc geninfo_all_blocks=1 00:28:43.789 --rc geninfo_unexecuted_blocks=1 00:28:43.789 00:28:43.789 ' 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:43.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.789 --rc genhtml_branch_coverage=1 00:28:43.789 --rc genhtml_function_coverage=1 00:28:43.789 --rc genhtml_legend=1 00:28:43.789 --rc geninfo_all_blocks=1 00:28:43.789 --rc geninfo_unexecuted_blocks=1 00:28:43.789 00:28:43.789 ' 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:43.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.789 --rc genhtml_branch_coverage=1 00:28:43.789 --rc genhtml_function_coverage=1 00:28:43.789 --rc genhtml_legend=1 00:28:43.789 --rc geninfo_all_blocks=1 00:28:43.789 --rc geninfo_unexecuted_blocks=1 00:28:43.789 00:28:43.789 ' 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:43.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.789 --rc genhtml_branch_coverage=1 00:28:43.789 --rc genhtml_function_coverage=1 00:28:43.789 --rc genhtml_legend=1 00:28:43.789 --rc geninfo_all_blocks=1 00:28:43.789 --rc geninfo_unexecuted_blocks=1 00:28:43.789 00:28:43.789 ' 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.789 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.790 13:09:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.790 13:09:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # 
prepare_net_devs 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.790 13:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:50.368 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:50.368 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:50.368 13:09:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:50.368 Found net devices under 0000:86:00.0: cvl_0_0 00:28:50.368 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:50.369 Found net devices under 0000:86:00.1: cvl_0_1 00:28:50.369 13:09:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.369 13:09:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:50.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:50.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:28:50.369 00:28:50.369 --- 10.0.0.2 ping statistics --- 00:28:50.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.369 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:28:50.369 00:28:50.369 --- 10.0.0.1 ping statistics --- 00:28:50.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.369 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:50.369 
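The namespace plumbing traced above (nvmf/common.sh@265–291) can be sketched as a standalone dry-run script. Interface names, the namespace name, and the 10.0.0.0/24 addresses are taken from the log; the real commands need root, so this prints each step instead of executing it:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps seen in the log: move the target
# NIC into its own network namespace, address both ends, and open TCP/4420.
NS=cvl_0_0_ns_spdk   # target namespace (from NVMF_TARGET_NAMESPACE)
TGT_IF=cvl_0_0       # target-side interface
INI_IF=cvl_0_1       # initiator-side interface

run() { echo "+ $*"; }   # swap the echo for "$@" to actually execute

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity is then verified in both directions, as in the log:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two ping checks mirror the log output above: the host pings the target address inside the namespace, and the namespace pings the initiator address back.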
13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1401418 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1401418 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1401418 ']' 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:50.369 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:50.369 [2024-10-15 13:09:10.042359] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:28:50.369 [2024-10-15 13:09:10.043355] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:28:50.369 [2024-10-15 13:09:10.043390] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.369 [2024-10-15 13:09:10.105519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:50.369 [2024-10-15 13:09:10.150352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.369 [2024-10-15 13:09:10.150385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.369 [2024-10-15 13:09:10.150392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.369 [2024-10-15 13:09:10.150399] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.369 [2024-10-15 13:09:10.150407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.369 [2024-10-15 13:09:10.151675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.369 [2024-10-15 13:09:10.151702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.369 [2024-10-15 13:09:10.151701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.369 [2024-10-15 13:09:10.219347] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:50.369 [2024-10-15 13:09:10.220181] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:50.369 [2024-10-15 13:09:10.220277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:50.369 [2024-10-15 13:09:10.220468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:50.369 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:50.369 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:28:50.369 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:50.369 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:50.369 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:50.369 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.369 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:50.369 [2024-10-15 13:09:10.468428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.369 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:50.629 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:50.629 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:50.888 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:50.888 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:50.888 13:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:51.147 13:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=73c7a636-db7a-45c4-8a78-e6674a471d59 00:28:51.147 13:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 73c7a636-db7a-45c4-8a78-e6674a471d59 lvol 20 00:28:51.405 13:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ea0263db-bda1-4891-b8e5-ecd213b98919 00:28:51.405 13:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:51.665 13:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ea0263db-bda1-4891-b8e5-ecd213b98919 00:28:51.665 13:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:51.923 [2024-10-15 13:09:12.120379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.923 13:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:52.182 
13:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1401916 00:28:52.182 13:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:52.182 13:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:53.118 13:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ea0263db-bda1-4891-b8e5-ecd213b98919 MY_SNAPSHOT 00:28:53.376 13:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=873f9725-f531-4f31-b3e9-c0b74938b108 00:28:53.376 13:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ea0263db-bda1-4891-b8e5-ecd213b98919 30 00:28:53.635 13:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 873f9725-f531-4f31-b3e9-c0b74938b108 MY_CLONE 00:28:53.894 13:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=640ec372-1ab8-4ce1-870b-0d41f1b2e5eb 00:28:53.894 13:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 640ec372-1ab8-4ce1-870b-0d41f1b2e5eb 00:28:54.462 13:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1401916 00:29:02.579 Initializing NVMe Controllers 00:29:02.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:02.579 
Controller IO queue size 128, less than required. 00:29:02.579 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:02.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:02.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:02.579 Initialization complete. Launching workers. 00:29:02.579 ======================================================== 00:29:02.579 Latency(us) 00:29:02.579 Device Information : IOPS MiB/s Average min max 00:29:02.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12613.40 49.27 10148.83 1048.99 50217.32 00:29:02.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12503.80 48.84 10237.21 4626.02 44143.11 00:29:02.579 ======================================================== 00:29:02.579 Total : 25117.20 98.11 10192.83 1048.99 50217.32 00:29:02.579 00:29:02.579 13:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:02.579 13:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ea0263db-bda1-4891-b8e5-ecd213b98919 00:29:02.837 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 73c7a636-db7a-45c4-8a78-e6674a471d59 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:03.096 rmmod nvme_tcp 00:29:03.096 rmmod nvme_fabrics 00:29:03.096 rmmod nvme_keyring 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1401418 ']' 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1401418 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1401418 ']' 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1401418 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:03.096 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 1401418 00:29:03.355 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1401418' 00:29:03.356 killing process with pid 1401418 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1401418 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1401418 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.356 13:09:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.356 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:05.893 00:29:05.893 real 0m21.855s 00:29:05.893 user 0m55.477s 00:29:05.893 sys 0m9.871s 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:05.893 ************************************ 00:29:05.893 END TEST nvmf_lvol 00:29:05.893 ************************************ 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:05.893 ************************************ 00:29:05.893 START TEST nvmf_lvs_grow 00:29:05.893 ************************************ 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:05.893 * Looking for test storage... 
00:29:05.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.893 13:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.893 13:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:05.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.893 --rc genhtml_branch_coverage=1 00:29:05.893 --rc genhtml_function_coverage=1 00:29:05.893 --rc genhtml_legend=1 00:29:05.893 --rc geninfo_all_blocks=1 00:29:05.893 --rc geninfo_unexecuted_blocks=1 00:29:05.893 00:29:05.893 ' 00:29:05.893 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:05.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.894 --rc genhtml_branch_coverage=1 00:29:05.894 --rc genhtml_function_coverage=1 00:29:05.894 --rc genhtml_legend=1 00:29:05.894 --rc geninfo_all_blocks=1 00:29:05.894 --rc geninfo_unexecuted_blocks=1 00:29:05.894 00:29:05.894 ' 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:05.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.894 --rc genhtml_branch_coverage=1 00:29:05.894 --rc genhtml_function_coverage=1 00:29:05.894 --rc genhtml_legend=1 00:29:05.894 --rc geninfo_all_blocks=1 00:29:05.894 --rc geninfo_unexecuted_blocks=1 00:29:05.894 00:29:05.894 ' 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:05.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.894 --rc genhtml_branch_coverage=1 00:29:05.894 --rc genhtml_function_coverage=1 00:29:05.894 --rc genhtml_legend=1 00:29:05.894 --rc geninfo_all_blocks=1 00:29:05.894 --rc 
geninfo_unexecuted_blocks=1 00:29:05.894 00:29:05.894 ' 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:05.894 13:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.894 13:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.894 13:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.894 13:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:12.470 
13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.470 13:09:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.470 13:09:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:12.470 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:12.470 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.470 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:12.471 Found net devices under 0000:86:00.0: cvl_0_0 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.471 13:09:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:12.471 Found net devices under 0000:86:00.1: cvl_0_1 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.471 
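The device-discovery loop above (common.sh lines 408–427) resolves each PCI address to its kernel network interface by globbing sysfs and stripping the path prefix. A minimal runnable sketch of that resolution, using a temporary directory as a stand-in for the real `/sys/bus/pci/devices` tree (the fake tree and its paths are illustrative, not the actual sysfs layout on the test node):

```shell
#!/usr/bin/env bash
# Sketch of common.sh's pci -> net_dev resolution, run against a fake sysfs
# tree so it works unprivileged and without the E810 hardware.
set -euo pipefail

sysfs=$(mktemp -d)                      # stand-in for /sys/bus/pci/devices
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" \
         "$sysfs/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)          # glob the interface directory
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

echo "${net_devs[*]}"
rm -rf "$sysfs"
```

This mirrors the two `Found net devices under 0000:86:00.x: cvl_0_x` lines in the log; the `##*/` expansion is the same one common.sh uses at line 425.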
13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:12.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:12.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:29:12.471 00:29:12.471 --- 10.0.0.2 ping statistics --- 00:29:12.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.471 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:12.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:29:12.471 00:29:12.471 --- 10.0.0.1 ping statistics --- 00:29:12.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.471 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:12.471 13:09:31 
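The `nvmf_tcp_init` sequence above moves one port of the NIC pair into a private namespace so that the target (10.0.0.2, inside `cvl_0_0_ns_spdk`) and the initiator (10.0.0.1, in the root namespace) talk over a real link, which the two pings then verify. An abridged dry-run sketch of that sequence — `ip(8)` is shadowed by a function so the sketch runs unprivileged and merely prints the commands; names and addresses are taken from the log, and the flush/loopback steps are omitted:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup done by nvmf/common.sh::nvmf_tcp_init.
set -euo pipefail

ip() { echo "ip $*"; }          # shadow ip(8): print instead of execute

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
target_if=cvl_0_0
initiator_if=cvl_0_1

cmds=$(
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set "$target_if" netns "$NVMF_TARGET_NAMESPACE"
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$target_if" up
)
echo "$cmds"
```

Note also the `ipts` helper in the log: it wraps `iptables` with `-m comment --comment 'SPDK_NVMF:…'` so cleanup can later delete exactly the rules this test added.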
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1407147 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1407147 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1407147 ']' 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:12.471 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:12.471 [2024-10-15 13:09:31.963293] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:12.471 [2024-10-15 13:09:31.964204] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:29:12.471 [2024-10-15 13:09:31.964237] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.471 [2024-10-15 13:09:32.034624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.471 [2024-10-15 13:09:32.073716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.471 [2024-10-15 13:09:32.073751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.471 [2024-10-15 13:09:32.073758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.471 [2024-10-15 13:09:32.073764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.471 [2024-10-15 13:09:32.073769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.471 [2024-10-15 13:09:32.074321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.471 [2024-10-15 13:09:32.140835] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:12.471 [2024-10-15 13:09:32.141067] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:12.471 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:12.471 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:29:12.471 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:12.471 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:12.471 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:12.471 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.471 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:12.471 [2024-10-15 13:09:32.382978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.471 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:12.471 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:12.471 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:12.471 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:12.472 ************************************ 00:29:12.472 START TEST lvs_grow_clean 00:29:12.472 ************************************ 00:29:12.472 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:29:12.472 13:09:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:12.472 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:12.472 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:12.472 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:12.472 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:12.472 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:12.472 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:12.472 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:12.472 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:12.472 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:12.472 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:12.731 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8a2a2099-8764-4a8d-a3db-863a90ae937b 00:29:12.731 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a2a2099-8764-4a8d-a3db-863a90ae937b 00:29:12.731 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:12.990 13:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:12.990 13:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:12.990 13:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8a2a2099-8764-4a8d-a3db-863a90ae937b lvol 150 00:29:12.990 13:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ef1b02ad-15f4-40b2-b070-f560fec104bd 00:29:12.990 13:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:12.990 13:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:13.249 [2024-10-15 13:09:33.422707] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:13.249 [2024-10-15 13:09:33.422823] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:13.249 true 00:29:13.249 13:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:13.249 13:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a2a2099-8764-4a8d-a3db-863a90ae937b 00:29:13.509 13:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:13.509 13:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:13.768 13:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ef1b02ad-15f4-40b2-b070-f560fec104bd 00:29:13.768 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:14.026 [2024-10-15 13:09:34.171193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.026 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:14.285 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:14.285 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1407558 00:29:14.285 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:14.285 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1407558 /var/tmp/bdevperf.sock 00:29:14.285 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1407558 ']' 00:29:14.285 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:14.285 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:14.285 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:14.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
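The `data_clusters` values this test asserts (49 before the grow, 99 after) follow directly from the sizes in the log: a 200 MiB AIO file at the 4 MiB `--cluster-sz` holds 50 clusters, and one cluster goes to lvstore metadata in this run (the exact metadata share depends on `--md-pages-per-cluster-ratio`; the single-cluster overhead here is read off the observed values, not derived from first principles). A quick sketch of the arithmetic:

```shell
#!/usr/bin/env bash
# Cluster arithmetic behind the (( data_clusters == 49/99 )) checks.
set -euo pipefail

cluster_sz=4194304          # --cluster-sz from bdev_lvol_create_lvstore
md_clusters=1               # metadata overhead observed in this run

data_clusters() {
    local size_mb=$1
    echo $(( size_mb * 1024 * 1024 / cluster_sz - md_clusters ))
}

before=$(data_clusters 200)   # initial 'truncate -s 200M' backing file
after=$(data_clusters 400)    # after 'truncate -s 400M' + bdev_aio_rescan
echo "before grow: $before, after grow: $after"
```

The same resize shows up in block terms in the rescan notice: 51200 → 102400 blocks of 4096 bytes, i.e. 200 MiB → 400 MiB.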
00:29:14.285 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:14.285 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:14.285 [2024-10-15 13:09:34.418721] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:29:14.285 [2024-10-15 13:09:34.418769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407558 ] 00:29:14.285 [2024-10-15 13:09:34.487237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.285 [2024-10-15 13:09:34.529086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.543 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:14.543 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:29:14.543 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:14.803 Nvme0n1 00:29:14.803 13:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:15.061 [ 00:29:15.061 { 00:29:15.061 "name": "Nvme0n1", 00:29:15.061 "aliases": [ 00:29:15.061 "ef1b02ad-15f4-40b2-b070-f560fec104bd" 00:29:15.061 ], 00:29:15.061 "product_name": "NVMe disk", 00:29:15.061 
"block_size": 4096, 00:29:15.061 "num_blocks": 38912, 00:29:15.061 "uuid": "ef1b02ad-15f4-40b2-b070-f560fec104bd", 00:29:15.061 "numa_id": 1, 00:29:15.061 "assigned_rate_limits": { 00:29:15.061 "rw_ios_per_sec": 0, 00:29:15.061 "rw_mbytes_per_sec": 0, 00:29:15.061 "r_mbytes_per_sec": 0, 00:29:15.061 "w_mbytes_per_sec": 0 00:29:15.061 }, 00:29:15.061 "claimed": false, 00:29:15.061 "zoned": false, 00:29:15.061 "supported_io_types": { 00:29:15.061 "read": true, 00:29:15.061 "write": true, 00:29:15.061 "unmap": true, 00:29:15.061 "flush": true, 00:29:15.061 "reset": true, 00:29:15.061 "nvme_admin": true, 00:29:15.061 "nvme_io": true, 00:29:15.061 "nvme_io_md": false, 00:29:15.061 "write_zeroes": true, 00:29:15.061 "zcopy": false, 00:29:15.061 "get_zone_info": false, 00:29:15.061 "zone_management": false, 00:29:15.061 "zone_append": false, 00:29:15.061 "compare": true, 00:29:15.061 "compare_and_write": true, 00:29:15.061 "abort": true, 00:29:15.061 "seek_hole": false, 00:29:15.061 "seek_data": false, 00:29:15.061 "copy": true, 00:29:15.061 "nvme_iov_md": false 00:29:15.061 }, 00:29:15.061 "memory_domains": [ 00:29:15.061 { 00:29:15.061 "dma_device_id": "system", 00:29:15.061 "dma_device_type": 1 00:29:15.061 } 00:29:15.061 ], 00:29:15.061 "driver_specific": { 00:29:15.061 "nvme": [ 00:29:15.061 { 00:29:15.061 "trid": { 00:29:15.061 "trtype": "TCP", 00:29:15.061 "adrfam": "IPv4", 00:29:15.061 "traddr": "10.0.0.2", 00:29:15.061 "trsvcid": "4420", 00:29:15.061 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:15.061 }, 00:29:15.061 "ctrlr_data": { 00:29:15.061 "cntlid": 1, 00:29:15.061 "vendor_id": "0x8086", 00:29:15.061 "model_number": "SPDK bdev Controller", 00:29:15.061 "serial_number": "SPDK0", 00:29:15.061 "firmware_revision": "25.01", 00:29:15.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.061 "oacs": { 00:29:15.061 "security": 0, 00:29:15.061 "format": 0, 00:29:15.061 "firmware": 0, 00:29:15.061 "ns_manage": 0 00:29:15.061 }, 00:29:15.061 "multi_ctrlr": true, 
00:29:15.061 "ana_reporting": false 00:29:15.061 }, 00:29:15.061 "vs": { 00:29:15.061 "nvme_version": "1.3" 00:29:15.061 }, 00:29:15.061 "ns_data": { 00:29:15.061 "id": 1, 00:29:15.061 "can_share": true 00:29:15.061 } 00:29:15.061 } 00:29:15.062 ], 00:29:15.062 "mp_policy": "active_passive" 00:29:15.062 } 00:29:15.062 } 00:29:15.062 ] 00:29:15.062 13:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1407787 00:29:15.062 13:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:15.062 13:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:15.062 Running I/O for 10 seconds... 00:29:15.998 Latency(us) 00:29:15.998 [2024-10-15T11:09:36.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.998 Nvme0n1 : 1.00 22972.00 89.73 0.00 0.00 0.00 0.00 0.00 00:29:15.998 [2024-10-15T11:09:36.317Z] =================================================================================================================== 00:29:15.998 [2024-10-15T11:09:36.317Z] Total : 22972.00 89.73 0.00 0.00 0.00 0.00 0.00 00:29:15.998 00:29:16.936 13:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8a2a2099-8764-4a8d-a3db-863a90ae937b 00:29:17.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:17.219 Nvme0n1 : 2.00 23336.50 91.16 0.00 0.00 0.00 0.00 0.00 00:29:17.219 [2024-10-15T11:09:37.538Z] 
=================================================================================================================== 00:29:17.219 [2024-10-15T11:09:37.538Z] Total : 23336.50 91.16 0.00 0.00 0.00 0.00 0.00 00:29:17.219 00:29:17.219 true 00:29:17.219 13:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a2a2099-8764-4a8d-a3db-863a90ae937b 00:29:17.219 13:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:17.539 13:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:17.539 13:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:17.539 13:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1407787 00:29:18.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:18.169 Nvme0n1 : 3.00 23374.33 91.31 0.00 0.00 0.00 0.00 0.00 00:29:18.169 [2024-10-15T11:09:38.488Z] =================================================================================================================== 00:29:18.169 [2024-10-15T11:09:38.488Z] Total : 23374.33 91.31 0.00 0.00 0.00 0.00 0.00 00:29:18.169 00:29:19.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.106 Nvme0n1 : 4.00 23453.75 91.62 0.00 0.00 0.00 0.00 0.00 00:29:19.106 [2024-10-15T11:09:39.425Z] =================================================================================================================== 00:29:19.106 [2024-10-15T11:09:39.425Z] Total : 23453.75 91.62 0.00 0.00 0.00 0.00 0.00 00:29:19.106 00:29:20.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:20.043 Nvme0n1 : 5.00 23467.00 91.67 0.00 0.00 0.00 0.00 0.00 00:29:20.043 [2024-10-15T11:09:40.362Z] =================================================================================================================== 00:29:20.043 [2024-10-15T11:09:40.362Z] Total : 23467.00 91.67 0.00 0.00 0.00 0.00 0.00 00:29:20.043 00:29:21.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.421 Nvme0n1 : 6.00 23517.50 91.87 0.00 0.00 0.00 0.00 0.00 00:29:21.421 [2024-10-15T11:09:41.740Z] =================================================================================================================== 00:29:21.421 [2024-10-15T11:09:41.740Z] Total : 23517.50 91.87 0.00 0.00 0.00 0.00 0.00 00:29:21.421 00:29:22.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:22.357 Nvme0n1 : 7.00 23564.14 92.05 0.00 0.00 0.00 0.00 0.00 00:29:22.357 [2024-10-15T11:09:42.676Z] =================================================================================================================== 00:29:22.357 [2024-10-15T11:09:42.676Z] Total : 23564.14 92.05 0.00 0.00 0.00 0.00 0.00 00:29:22.357 00:29:23.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:23.293 Nvme0n1 : 8.00 23594.00 92.16 0.00 0.00 0.00 0.00 0.00 00:29:23.293 [2024-10-15T11:09:43.612Z] =================================================================================================================== 00:29:23.293 [2024-10-15T11:09:43.612Z] Total : 23594.00 92.16 0.00 0.00 0.00 0.00 0.00 00:29:23.293 00:29:24.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:24.229 Nvme0n1 : 9.00 23629.22 92.30 0.00 0.00 0.00 0.00 0.00 00:29:24.229 [2024-10-15T11:09:44.548Z] =================================================================================================================== 00:29:24.229 [2024-10-15T11:09:44.548Z] Total : 23629.22 92.30 0.00 0.00 0.00 0.00 0.00 00:29:24.229 
00:29:25.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.166 Nvme0n1 : 10.00 23654.80 92.40 0.00 0.00 0.00 0.00 0.00 00:29:25.166 [2024-10-15T11:09:45.485Z] =================================================================================================================== 00:29:25.166 [2024-10-15T11:09:45.485Z] Total : 23654.80 92.40 0.00 0.00 0.00 0.00 0.00 00:29:25.166 00:29:25.166 00:29:25.166 Latency(us) 00:29:25.166 [2024-10-15T11:09:45.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.167 Nvme0n1 : 10.00 23656.03 92.41 0.00 0.00 5407.49 3120.76 26838.55 00:29:25.167 [2024-10-15T11:09:45.486Z] =================================================================================================================== 00:29:25.167 [2024-10-15T11:09:45.486Z] Total : 23656.03 92.41 0.00 0.00 5407.49 3120.76 26838.55 00:29:25.167 { 00:29:25.167 "results": [ 00:29:25.167 { 00:29:25.167 "job": "Nvme0n1", 00:29:25.167 "core_mask": "0x2", 00:29:25.167 "workload": "randwrite", 00:29:25.167 "status": "finished", 00:29:25.167 "queue_depth": 128, 00:29:25.167 "io_size": 4096, 00:29:25.167 "runtime": 10.004893, 00:29:25.167 "iops": 23656.025106915185, 00:29:25.167 "mibps": 92.40634807388744, 00:29:25.167 "io_failed": 0, 00:29:25.167 "io_timeout": 0, 00:29:25.167 "avg_latency_us": 5407.491039532445, 00:29:25.167 "min_latency_us": 3120.7619047619046, 00:29:25.167 "max_latency_us": 26838.55238095238 00:29:25.167 } 00:29:25.167 ], 00:29:25.167 "core_count": 1 00:29:25.167 } 00:29:25.167 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1407558 00:29:25.167 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1407558 ']' 00:29:25.167 13:09:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1407558 00:29:25.167 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:29:25.167 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:25.167 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1407558 00:29:25.167 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:25.167 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:25.167 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1407558' 00:29:25.167 killing process with pid 1407558 00:29:25.167 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1407558 00:29:25.167 Received shutdown signal, test time was about 10.000000 seconds 00:29:25.167 00:29:25.167 Latency(us) 00:29:25.167 [2024-10-15T11:09:45.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.167 [2024-10-15T11:09:45.486Z] =================================================================================================================== 00:29:25.167 [2024-10-15T11:09:45.486Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:25.167 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1407558 00:29:25.426 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:25.686 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:25.686 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a2a2099-8764-4a8d-a3db-863a90ae937b 00:29:25.686 13:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:25.945 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:25.945 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:25.945 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:26.204 [2024-10-15 13:09:46.342774] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:26.204 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a2a2099-8764-4a8d-a3db-863a90ae937b 00:29:26.204 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:29:26.204 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a2a2099-8764-4a8d-a3db-863a90ae937b 00:29:26.204 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.204 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:26.204 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.204 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:26.204 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.204 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:26.204 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.204 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:26.204 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a2a2099-8764-4a8d-a3db-863a90ae937b 00:29:26.463 request: 00:29:26.463 { 00:29:26.463 "uuid": "8a2a2099-8764-4a8d-a3db-863a90ae937b", 00:29:26.463 "method": 
"bdev_lvol_get_lvstores", 00:29:26.463 "req_id": 1 00:29:26.463 } 00:29:26.463 Got JSON-RPC error response 00:29:26.463 response: 00:29:26.463 { 00:29:26.463 "code": -19, 00:29:26.463 "message": "No such device" 00:29:26.463 } 00:29:26.463 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:29:26.463 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:26.464 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:26.464 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:26.464 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:26.464 aio_bdev 00:29:26.464 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ef1b02ad-15f4-40b2-b070-f560fec104bd 00:29:26.464 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=ef1b02ad-15f4-40b2-b070-f560fec104bd 00:29:26.464 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:26.464 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:29:26.464 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:26.464 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:26.464 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:26.723 13:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ef1b02ad-15f4-40b2-b070-f560fec104bd -t 2000 00:29:26.982 [ 00:29:26.982 { 00:29:26.982 "name": "ef1b02ad-15f4-40b2-b070-f560fec104bd", 00:29:26.982 "aliases": [ 00:29:26.982 "lvs/lvol" 00:29:26.982 ], 00:29:26.982 "product_name": "Logical Volume", 00:29:26.982 "block_size": 4096, 00:29:26.982 "num_blocks": 38912, 00:29:26.982 "uuid": "ef1b02ad-15f4-40b2-b070-f560fec104bd", 00:29:26.982 "assigned_rate_limits": { 00:29:26.982 "rw_ios_per_sec": 0, 00:29:26.982 "rw_mbytes_per_sec": 0, 00:29:26.982 "r_mbytes_per_sec": 0, 00:29:26.982 "w_mbytes_per_sec": 0 00:29:26.982 }, 00:29:26.982 "claimed": false, 00:29:26.982 "zoned": false, 00:29:26.982 "supported_io_types": { 00:29:26.982 "read": true, 00:29:26.982 "write": true, 00:29:26.982 "unmap": true, 00:29:26.982 "flush": false, 00:29:26.982 "reset": true, 00:29:26.982 "nvme_admin": false, 00:29:26.982 "nvme_io": false, 00:29:26.982 "nvme_io_md": false, 00:29:26.982 "write_zeroes": true, 00:29:26.982 "zcopy": false, 00:29:26.982 "get_zone_info": false, 00:29:26.982 "zone_management": false, 00:29:26.982 "zone_append": false, 00:29:26.982 "compare": false, 00:29:26.982 "compare_and_write": false, 00:29:26.982 "abort": false, 00:29:26.982 "seek_hole": true, 00:29:26.982 "seek_data": true, 00:29:26.982 "copy": false, 00:29:26.982 "nvme_iov_md": false 00:29:26.982 }, 00:29:26.982 "driver_specific": { 00:29:26.982 "lvol": { 00:29:26.983 "lvol_store_uuid": "8a2a2099-8764-4a8d-a3db-863a90ae937b", 00:29:26.983 "base_bdev": "aio_bdev", 00:29:26.983 
"thin_provision": false, 00:29:26.983 "num_allocated_clusters": 38, 00:29:26.983 "snapshot": false, 00:29:26.983 "clone": false, 00:29:26.983 "esnap_clone": false 00:29:26.983 } 00:29:26.983 } 00:29:26.983 } 00:29:26.983 ] 00:29:26.983 13:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:29:26.983 13:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a2a2099-8764-4a8d-a3db-863a90ae937b 00:29:26.983 13:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:27.242 13:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:27.242 13:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a2a2099-8764-4a8d-a3db-863a90ae937b 00:29:27.242 13:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:27.242 13:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:27.242 13:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ef1b02ad-15f4-40b2-b070-f560fec104bd 00:29:27.501 13:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a2a2099-8764-4a8d-a3db-863a90ae937b 
00:29:27.760 13:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:28.019 00:29:28.019 real 0m15.682s 00:29:28.019 user 0m15.144s 00:29:28.019 sys 0m1.511s 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:28.019 ************************************ 00:29:28.019 END TEST lvs_grow_clean 00:29:28.019 ************************************ 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:28.019 ************************************ 00:29:28.019 START TEST lvs_grow_dirty 00:29:28.019 ************************************ 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:28.019 13:09:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:28.019 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:28.278 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:28.278 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:28.536 13:09:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:28.536 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:28.536 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:28.536 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:28.536 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:28.536 13:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d lvol 150 00:29:28.795 13:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bce8fe8e-aba3-4ed2-b732-94b33e300b52 00:29:28.795 13:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:28.795 13:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:29.054 [2024-10-15 13:09:49.178735] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:29.054 [2024-10-15 
13:09:49.178867] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:29.054 true 00:29:29.054 13:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:29.054 13:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:29.313 13:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:29.313 13:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:29.313 13:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bce8fe8e-aba3-4ed2-b732-94b33e300b52 00:29:29.572 13:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:29.831 [2024-10-15 13:09:49.939174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.831 13:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:30.091 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1410139 00:29:30.091 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:30.091 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:30.091 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1410139 /var/tmp/bdevperf.sock 00:29:30.091 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1410139 ']' 00:29:30.091 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:30.091 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:30.091 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:30.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:30.091 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:30.091 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:30.091 [2024-10-15 13:09:50.204503] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:29:30.091 [2024-10-15 13:09:50.204550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410139 ] 00:29:30.091 [2024-10-15 13:09:50.271007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.092 [2024-10-15 13:09:50.313000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.092 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:30.092 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:29:30.092 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:30.660 Nvme0n1 00:29:30.660 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:30.660 [ 00:29:30.660 { 00:29:30.660 "name": "Nvme0n1", 00:29:30.660 "aliases": [ 00:29:30.660 "bce8fe8e-aba3-4ed2-b732-94b33e300b52" 00:29:30.660 ], 00:29:30.660 "product_name": "NVMe disk", 00:29:30.660 "block_size": 4096, 00:29:30.660 "num_blocks": 38912, 00:29:30.660 "uuid": "bce8fe8e-aba3-4ed2-b732-94b33e300b52", 00:29:30.660 "numa_id": 1, 00:29:30.660 "assigned_rate_limits": { 00:29:30.660 "rw_ios_per_sec": 0, 00:29:30.660 "rw_mbytes_per_sec": 0, 00:29:30.660 "r_mbytes_per_sec": 0, 00:29:30.660 "w_mbytes_per_sec": 0 00:29:30.660 }, 00:29:30.660 "claimed": false, 00:29:30.660 "zoned": false, 
00:29:30.660 "supported_io_types": { 00:29:30.660 "read": true, 00:29:30.660 "write": true, 00:29:30.660 "unmap": true, 00:29:30.660 "flush": true, 00:29:30.660 "reset": true, 00:29:30.660 "nvme_admin": true, 00:29:30.660 "nvme_io": true, 00:29:30.660 "nvme_io_md": false, 00:29:30.660 "write_zeroes": true, 00:29:30.660 "zcopy": false, 00:29:30.660 "get_zone_info": false, 00:29:30.660 "zone_management": false, 00:29:30.660 "zone_append": false, 00:29:30.660 "compare": true, 00:29:30.660 "compare_and_write": true, 00:29:30.660 "abort": true, 00:29:30.660 "seek_hole": false, 00:29:30.660 "seek_data": false, 00:29:30.660 "copy": true, 00:29:30.660 "nvme_iov_md": false 00:29:30.660 }, 00:29:30.660 "memory_domains": [ 00:29:30.660 { 00:29:30.660 "dma_device_id": "system", 00:29:30.660 "dma_device_type": 1 00:29:30.660 } 00:29:30.660 ], 00:29:30.660 "driver_specific": { 00:29:30.660 "nvme": [ 00:29:30.660 { 00:29:30.660 "trid": { 00:29:30.660 "trtype": "TCP", 00:29:30.660 "adrfam": "IPv4", 00:29:30.660 "traddr": "10.0.0.2", 00:29:30.660 "trsvcid": "4420", 00:29:30.660 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:30.660 }, 00:29:30.660 "ctrlr_data": { 00:29:30.660 "cntlid": 1, 00:29:30.660 "vendor_id": "0x8086", 00:29:30.660 "model_number": "SPDK bdev Controller", 00:29:30.660 "serial_number": "SPDK0", 00:29:30.660 "firmware_revision": "25.01", 00:29:30.660 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:30.660 "oacs": { 00:29:30.660 "security": 0, 00:29:30.660 "format": 0, 00:29:30.660 "firmware": 0, 00:29:30.660 "ns_manage": 0 00:29:30.660 }, 00:29:30.660 "multi_ctrlr": true, 00:29:30.660 "ana_reporting": false 00:29:30.660 }, 00:29:30.660 "vs": { 00:29:30.660 "nvme_version": "1.3" 00:29:30.660 }, 00:29:30.660 "ns_data": { 00:29:30.660 "id": 1, 00:29:30.660 "can_share": true 00:29:30.660 } 00:29:30.660 } 00:29:30.660 ], 00:29:30.660 "mp_policy": "active_passive" 00:29:30.660 } 00:29:30.660 } 00:29:30.660 ] 00:29:30.920 13:09:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1410370 00:29:30.920 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:30.920 13:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:30.920 Running I/O for 10 seconds... 00:29:31.858 Latency(us) 00:29:31.858 [2024-10-15T11:09:52.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.858 Nvme0n1 : 1.00 22931.00 89.57 0.00 0.00 0.00 0.00 0.00 00:29:31.858 [2024-10-15T11:09:52.177Z] =================================================================================================================== 00:29:31.858 [2024-10-15T11:09:52.177Z] Total : 22931.00 89.57 0.00 0.00 0.00 0.00 0.00 00:29:31.858 00:29:32.796 13:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:32.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.796 Nvme0n1 : 2.00 23192.00 90.59 0.00 0.00 0.00 0.00 0.00 00:29:32.796 [2024-10-15T11:09:53.115Z] =================================================================================================================== 00:29:32.796 [2024-10-15T11:09:53.115Z] Total : 23192.00 90.59 0.00 0.00 0.00 0.00 0.00 00:29:32.796 00:29:33.055 true 00:29:33.055 13:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:33.055 13:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:33.055 13:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:33.055 13:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:33.055 13:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1410370 00:29:33.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.993 Nvme0n1 : 3.00 23327.33 91.12 0.00 0.00 0.00 0.00 0.00 00:29:33.993 [2024-10-15T11:09:54.312Z] =================================================================================================================== 00:29:33.993 [2024-10-15T11:09:54.312Z] Total : 23327.33 91.12 0.00 0.00 0.00 0.00 0.00 00:29:33.993 00:29:34.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.931 Nvme0n1 : 4.00 23397.25 91.40 0.00 0.00 0.00 0.00 0.00 00:29:34.931 [2024-10-15T11:09:55.250Z] =================================================================================================================== 00:29:34.931 [2024-10-15T11:09:55.250Z] Total : 23397.25 91.40 0.00 0.00 0.00 0.00 0.00 00:29:34.931 00:29:35.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.868 Nvme0n1 : 5.00 23460.40 91.64 0.00 0.00 0.00 0.00 0.00 00:29:35.868 [2024-10-15T11:09:56.187Z] =================================================================================================================== 00:29:35.868 [2024-10-15T11:09:56.187Z] Total : 23460.40 91.64 0.00 0.00 0.00 0.00 0.00 00:29:35.868 00:29:36.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:36.806 Nvme0n1 : 6.00 23515.50 91.86 0.00 0.00 0.00 0.00 0.00 00:29:36.806 [2024-10-15T11:09:57.125Z] =================================================================================================================== 00:29:36.806 [2024-10-15T11:09:57.125Z] Total : 23515.50 91.86 0.00 0.00 0.00 0.00 0.00 00:29:36.806 00:29:38.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.183 Nvme0n1 : 7.00 23541.43 91.96 0.00 0.00 0.00 0.00 0.00 00:29:38.183 [2024-10-15T11:09:58.502Z] =================================================================================================================== 00:29:38.183 [2024-10-15T11:09:58.502Z] Total : 23541.43 91.96 0.00 0.00 0.00 0.00 0.00 00:29:38.183 00:29:39.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.121 Nvme0n1 : 8.00 23580.88 92.11 0.00 0.00 0.00 0.00 0.00 00:29:39.121 [2024-10-15T11:09:59.440Z] =================================================================================================================== 00:29:39.121 [2024-10-15T11:09:59.440Z] Total : 23580.88 92.11 0.00 0.00 0.00 0.00 0.00 00:29:39.121 00:29:40.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.057 Nvme0n1 : 9.00 23597.22 92.18 0.00 0.00 0.00 0.00 0.00 00:29:40.057 [2024-10-15T11:10:00.376Z] =================================================================================================================== 00:29:40.057 [2024-10-15T11:10:00.376Z] Total : 23597.22 92.18 0.00 0.00 0.00 0.00 0.00 00:29:40.057 00:29:40.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.994 Nvme0n1 : 10.00 23580.40 92.11 0.00 0.00 0.00 0.00 0.00 00:29:40.994 [2024-10-15T11:10:01.313Z] =================================================================================================================== 00:29:40.994 [2024-10-15T11:10:01.313Z] Total : 23580.40 92.11 0.00 0.00 0.00 0.00 0.00 00:29:40.994 00:29:40.994 
00:29:40.994 Latency(us) 00:29:40.994 [2024-10-15T11:10:01.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.994 Nvme0n1 : 10.00 23582.76 92.12 0.00 0.00 5424.52 3167.57 26713.72 00:29:40.994 [2024-10-15T11:10:01.313Z] =================================================================================================================== 00:29:40.994 [2024-10-15T11:10:01.313Z] Total : 23582.76 92.12 0.00 0.00 5424.52 3167.57 26713.72 00:29:40.994 { 00:29:40.994 "results": [ 00:29:40.994 { 00:29:40.994 "job": "Nvme0n1", 00:29:40.994 "core_mask": "0x2", 00:29:40.994 "workload": "randwrite", 00:29:40.994 "status": "finished", 00:29:40.994 "queue_depth": 128, 00:29:40.994 "io_size": 4096, 00:29:40.994 "runtime": 10.004428, 00:29:40.994 "iops": 23582.757554954667, 00:29:40.994 "mibps": 92.12014669904167, 00:29:40.994 "io_failed": 0, 00:29:40.994 "io_timeout": 0, 00:29:40.994 "avg_latency_us": 5424.521769969233, 00:29:40.994 "min_latency_us": 3167.5733333333333, 00:29:40.994 "max_latency_us": 26713.721904761904 00:29:40.994 } 00:29:40.994 ], 00:29:40.994 "core_count": 1 00:29:40.994 } 00:29:40.994 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1410139 00:29:40.994 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1410139 ']' 00:29:40.994 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1410139 00:29:40.994 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:29:40.994 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:40.994 13:10:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1410139 00:29:40.994 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:40.994 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:40.994 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1410139' 00:29:40.994 killing process with pid 1410139 00:29:40.994 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1410139 00:29:40.994 Received shutdown signal, test time was about 10.000000 seconds 00:29:40.994 00:29:40.994 Latency(us) 00:29:40.994 [2024-10-15T11:10:01.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.994 [2024-10-15T11:10:01.313Z] =================================================================================================================== 00:29:40.994 [2024-10-15T11:10:01.313Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:40.994 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1410139 00:29:41.252 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:41.252 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:41.511 13:10:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:41.511 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:41.769 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:41.769 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:41.769 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1407147 00:29:41.769 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1407147 00:29:41.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1407147 Killed "${NVMF_APP[@]}" "$@" 00:29:41.769 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:41.770 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:41.770 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:41.770 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:41.770 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:41.770 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1412001 00:29:41.770 13:10:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1412001 00:29:41.770 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:41.770 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1412001 ']' 00:29:41.770 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.770 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:41.770 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.770 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:41.770 13:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:41.770 [2024-10-15 13:10:02.022378] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:41.770 [2024-10-15 13:10:02.023279] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:29:41.770 [2024-10-15 13:10:02.023314] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.029 [2024-10-15 13:10:02.096138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.029 [2024-10-15 13:10:02.136341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.029 [2024-10-15 13:10:02.136377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.029 [2024-10-15 13:10:02.136384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.029 [2024-10-15 13:10:02.136390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.029 [2024-10-15 13:10:02.136396] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.029 [2024-10-15 13:10:02.136955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.029 [2024-10-15 13:10:02.202868] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:42.029 [2024-10-15 13:10:02.203108] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:42.029 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:42.029 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:29:42.029 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:42.029 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:42.029 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:42.029 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.029 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:42.289 [2024-10-15 13:10:02.450366] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:42.289 [2024-10-15 13:10:02.450565] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:42.289 [2024-10-15 13:10:02.450665] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:42.289 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:42.289 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bce8fe8e-aba3-4ed2-b732-94b33e300b52 00:29:42.289 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local 
bdev_name=bce8fe8e-aba3-4ed2-b732-94b33e300b52 00:29:42.289 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:42.289 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:29:42.289 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:42.289 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:42.289 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:42.549 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bce8fe8e-aba3-4ed2-b732-94b33e300b52 -t 2000 00:29:42.549 [ 00:29:42.549 { 00:29:42.549 "name": "bce8fe8e-aba3-4ed2-b732-94b33e300b52", 00:29:42.549 "aliases": [ 00:29:42.549 "lvs/lvol" 00:29:42.549 ], 00:29:42.549 "product_name": "Logical Volume", 00:29:42.549 "block_size": 4096, 00:29:42.549 "num_blocks": 38912, 00:29:42.549 "uuid": "bce8fe8e-aba3-4ed2-b732-94b33e300b52", 00:29:42.549 "assigned_rate_limits": { 00:29:42.549 "rw_ios_per_sec": 0, 00:29:42.549 "rw_mbytes_per_sec": 0, 00:29:42.549 "r_mbytes_per_sec": 0, 00:29:42.549 "w_mbytes_per_sec": 0 00:29:42.549 }, 00:29:42.549 "claimed": false, 00:29:42.549 "zoned": false, 00:29:42.549 "supported_io_types": { 00:29:42.549 "read": true, 00:29:42.549 "write": true, 00:29:42.549 "unmap": true, 00:29:42.549 "flush": false, 00:29:42.549 "reset": true, 00:29:42.549 "nvme_admin": false, 00:29:42.549 "nvme_io": false, 00:29:42.549 "nvme_io_md": false, 00:29:42.549 "write_zeroes": true, 
00:29:42.549 "zcopy": false, 00:29:42.549 "get_zone_info": false, 00:29:42.549 "zone_management": false, 00:29:42.549 "zone_append": false, 00:29:42.549 "compare": false, 00:29:42.549 "compare_and_write": false, 00:29:42.549 "abort": false, 00:29:42.549 "seek_hole": true, 00:29:42.549 "seek_data": true, 00:29:42.549 "copy": false, 00:29:42.549 "nvme_iov_md": false 00:29:42.549 }, 00:29:42.549 "driver_specific": { 00:29:42.549 "lvol": { 00:29:42.549 "lvol_store_uuid": "1c76a40e-0aaf-465c-a7e9-8fca78978a7d", 00:29:42.549 "base_bdev": "aio_bdev", 00:29:42.549 "thin_provision": false, 00:29:42.549 "num_allocated_clusters": 38, 00:29:42.549 "snapshot": false, 00:29:42.549 "clone": false, 00:29:42.549 "esnap_clone": false 00:29:42.549 } 00:29:42.549 } 00:29:42.549 } 00:29:42.549 ] 00:29:42.549 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:29:42.808 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:42.808 13:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:42.808 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:42.808 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:42.808 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:43.068 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:43.068 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:43.327 [2024-10-15 13:10:03.433413] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:43.327 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:43.327 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:29:43.327 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:43.327 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.327 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:43.327 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.327 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:43.327 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.327 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:43.328 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.328 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:43.328 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:43.586 request: 00:29:43.586 { 00:29:43.586 "uuid": "1c76a40e-0aaf-465c-a7e9-8fca78978a7d", 00:29:43.586 "method": "bdev_lvol_get_lvstores", 00:29:43.586 "req_id": 1 00:29:43.586 } 00:29:43.586 Got JSON-RPC error response 00:29:43.586 response: 00:29:43.586 { 00:29:43.586 "code": -19, 00:29:43.586 "message": "No such device" 00:29:43.586 } 00:29:43.586 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:29:43.586 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:43.586 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:43.586 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:43.586 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:43.586 aio_bdev 00:29:43.586 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bce8fe8e-aba3-4ed2-b732-94b33e300b52 00:29:43.586 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=bce8fe8e-aba3-4ed2-b732-94b33e300b52 00:29:43.587 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:43.587 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:29:43.587 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:43.587 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:43.587 13:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:43.846 13:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bce8fe8e-aba3-4ed2-b732-94b33e300b52 -t 2000 00:29:44.105 [ 00:29:44.105 { 00:29:44.105 "name": "bce8fe8e-aba3-4ed2-b732-94b33e300b52", 00:29:44.105 "aliases": [ 00:29:44.105 "lvs/lvol" 00:29:44.105 ], 00:29:44.105 "product_name": "Logical Volume", 00:29:44.105 "block_size": 4096, 00:29:44.105 "num_blocks": 38912, 00:29:44.105 "uuid": "bce8fe8e-aba3-4ed2-b732-94b33e300b52", 00:29:44.105 "assigned_rate_limits": { 00:29:44.105 "rw_ios_per_sec": 0, 00:29:44.105 "rw_mbytes_per_sec": 0, 00:29:44.105 
"r_mbytes_per_sec": 0, 00:29:44.105 "w_mbytes_per_sec": 0 00:29:44.105 }, 00:29:44.105 "claimed": false, 00:29:44.105 "zoned": false, 00:29:44.105 "supported_io_types": { 00:29:44.105 "read": true, 00:29:44.105 "write": true, 00:29:44.105 "unmap": true, 00:29:44.105 "flush": false, 00:29:44.105 "reset": true, 00:29:44.105 "nvme_admin": false, 00:29:44.105 "nvme_io": false, 00:29:44.105 "nvme_io_md": false, 00:29:44.105 "write_zeroes": true, 00:29:44.105 "zcopy": false, 00:29:44.105 "get_zone_info": false, 00:29:44.105 "zone_management": false, 00:29:44.105 "zone_append": false, 00:29:44.105 "compare": false, 00:29:44.105 "compare_and_write": false, 00:29:44.105 "abort": false, 00:29:44.105 "seek_hole": true, 00:29:44.105 "seek_data": true, 00:29:44.105 "copy": false, 00:29:44.105 "nvme_iov_md": false 00:29:44.105 }, 00:29:44.105 "driver_specific": { 00:29:44.105 "lvol": { 00:29:44.105 "lvol_store_uuid": "1c76a40e-0aaf-465c-a7e9-8fca78978a7d", 00:29:44.105 "base_bdev": "aio_bdev", 00:29:44.105 "thin_provision": false, 00:29:44.105 "num_allocated_clusters": 38, 00:29:44.105 "snapshot": false, 00:29:44.105 "clone": false, 00:29:44.105 "esnap_clone": false 00:29:44.105 } 00:29:44.105 } 00:29:44.105 } 00:29:44.105 ] 00:29:44.105 13:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:29:44.105 13:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:44.105 13:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:44.365 13:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:44.365 13:10:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:44.365 13:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:44.365 13:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:44.365 13:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bce8fe8e-aba3-4ed2-b732-94b33e300b52 00:29:44.624 13:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1c76a40e-0aaf-465c-a7e9-8fca78978a7d 00:29:44.884 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:45.143 00:29:45.143 real 0m17.059s 00:29:45.143 user 0m34.460s 00:29:45.143 sys 0m3.778s 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:45.143 ************************************ 00:29:45.143 END TEST lvs_grow_dirty 00:29:45.143 ************************************ 
00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:45.143 nvmf_trace.0 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:45.143 13:10:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:45.143 rmmod nvme_tcp 00:29:45.143 rmmod nvme_fabrics 00:29:45.143 rmmod nvme_keyring 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1412001 ']' 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1412001 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1412001 ']' 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1412001 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:45.143 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412001 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:45.403 
13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412001' 00:29:45.403 killing process with pid 1412001 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1412001 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1412001 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.403 13:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.938 
13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.938 00:29:47.938 real 0m41.943s 00:29:47.938 user 0m52.070s 00:29:47.938 sys 0m10.245s 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:47.938 ************************************ 00:29:47.938 END TEST nvmf_lvs_grow 00:29:47.938 ************************************ 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:47.938 ************************************ 00:29:47.938 START TEST nvmf_bdev_io_wait 00:29:47.938 ************************************ 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:47.938 * Looking for test storage... 
00:29:47.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:47.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.938 --rc genhtml_branch_coverage=1 00:29:47.938 --rc genhtml_function_coverage=1 00:29:47.938 --rc genhtml_legend=1 00:29:47.938 --rc geninfo_all_blocks=1 00:29:47.938 --rc geninfo_unexecuted_blocks=1 00:29:47.938 00:29:47.938 ' 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:47.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.938 --rc genhtml_branch_coverage=1 00:29:47.938 --rc genhtml_function_coverage=1 00:29:47.938 --rc genhtml_legend=1 00:29:47.938 --rc geninfo_all_blocks=1 00:29:47.938 --rc geninfo_unexecuted_blocks=1 00:29:47.938 00:29:47.938 ' 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:47.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.938 --rc genhtml_branch_coverage=1 00:29:47.938 --rc genhtml_function_coverage=1 00:29:47.938 --rc genhtml_legend=1 00:29:47.938 --rc geninfo_all_blocks=1 00:29:47.938 --rc geninfo_unexecuted_blocks=1 00:29:47.938 00:29:47.938 ' 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:47.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.938 --rc genhtml_branch_coverage=1 00:29:47.938 --rc genhtml_function_coverage=1 
00:29:47.938 --rc genhtml_legend=1 00:29:47.938 --rc geninfo_all_blocks=1 00:29:47.938 --rc geninfo_unexecuted_blocks=1 00:29:47.938 00:29:47.938 ' 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:47.938 13:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.938 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.939 13:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.939 13:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:47.939 13:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.939 13:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.522 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:54.523 13:10:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:54.523 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:54.523 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:54.523 Found net devices under 0000:86:00.0: cvl_0_0 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:54.523 Found net devices under 0000:86:00.1: cvl_0_1 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:29:54.523 13:10:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:54.523 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:54.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:54.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:29:54.524 00:29:54.524 --- 10.0.0.2 ping statistics --- 00:29:54.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.524 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:54.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:29:54.524 00:29:54.524 --- 10.0.0.1 ping statistics --- 00:29:54.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.524 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:54.524 13:10:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1416172 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1416172 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1416172 ']' 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:54.524 13:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.524 [2024-10-15 13:10:13.953267] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:54.524 [2024-10-15 13:10:13.954235] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:29:54.524 [2024-10-15 13:10:13.954273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.524 [2024-10-15 13:10:14.026987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:54.524 [2024-10-15 13:10:14.071080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.524 [2024-10-15 13:10:14.071115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.524 [2024-10-15 13:10:14.071123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.524 [2024-10-15 13:10:14.071129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.524 [2024-10-15 13:10:14.071134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:54.524 [2024-10-15 13:10:14.072597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.524 [2024-10-15 13:10:14.072717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.524 [2024-10-15 13:10:14.072747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.524 [2024-10-15 13:10:14.072748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:54.524 [2024-10-15 13:10:14.073165] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.524 13:10:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.524 [2024-10-15 13:10:14.217861] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:54.524 [2024-10-15 13:10:14.218581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:54.524 [2024-10-15 13:10:14.218794] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:54.524 [2024-10-15 13:10:14.218915] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.524 [2024-10-15 13:10:14.229581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.524 Malloc0 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.524 13:10:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.524 [2024-10-15 13:10:14.301771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1416271 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1416273 00:29:54.524 13:10:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:54.524 { 00:29:54.524 "params": { 00:29:54.524 "name": "Nvme$subsystem", 00:29:54.524 "trtype": "$TEST_TRANSPORT", 00:29:54.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.524 "adrfam": "ipv4", 00:29:54.524 "trsvcid": "$NVMF_PORT", 00:29:54.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.524 "hdgst": ${hdgst:-false}, 00:29:54.524 "ddgst": ${ddgst:-false} 00:29:54.524 }, 00:29:54.524 "method": "bdev_nvme_attach_controller" 00:29:54.524 } 00:29:54.524 EOF 00:29:54.524 )") 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1416275 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:54.524 13:10:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:54.524 { 00:29:54.524 "params": { 00:29:54.524 "name": "Nvme$subsystem", 00:29:54.524 "trtype": "$TEST_TRANSPORT", 00:29:54.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.524 "adrfam": "ipv4", 00:29:54.524 "trsvcid": "$NVMF_PORT", 00:29:54.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.524 "hdgst": ${hdgst:-false}, 00:29:54.524 "ddgst": ${ddgst:-false} 00:29:54.524 }, 00:29:54.524 "method": "bdev_nvme_attach_controller" 00:29:54.524 } 00:29:54.524 EOF 00:29:54.524 )") 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1416278 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:54.524 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:54.524 { 00:29:54.524 "params": { 00:29:54.524 "name": "Nvme$subsystem", 00:29:54.524 "trtype": "$TEST_TRANSPORT", 00:29:54.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.524 "adrfam": "ipv4", 00:29:54.524 "trsvcid": "$NVMF_PORT", 00:29:54.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.525 "hdgst": ${hdgst:-false}, 00:29:54.525 "ddgst": ${ddgst:-false} 00:29:54.525 }, 00:29:54.525 "method": "bdev_nvme_attach_controller" 00:29:54.525 } 00:29:54.525 EOF 00:29:54.525 )") 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:54.525 { 00:29:54.525 "params": { 00:29:54.525 "name": "Nvme$subsystem", 00:29:54.525 "trtype": "$TEST_TRANSPORT", 00:29:54.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.525 "adrfam": "ipv4", 00:29:54.525 "trsvcid": "$NVMF_PORT", 00:29:54.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.525 "hdgst": ${hdgst:-false}, 00:29:54.525 "ddgst": ${ddgst:-false} 00:29:54.525 }, 00:29:54.525 "method": 
"bdev_nvme_attach_controller" 00:29:54.525 } 00:29:54.525 EOF 00:29:54.525 )") 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1416271 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:54.525 "params": { 00:29:54.525 "name": "Nvme1", 00:29:54.525 "trtype": "tcp", 00:29:54.525 "traddr": "10.0.0.2", 00:29:54.525 "adrfam": "ipv4", 00:29:54.525 "trsvcid": "4420", 00:29:54.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:54.525 "hdgst": false, 00:29:54.525 "ddgst": false 00:29:54.525 }, 00:29:54.525 "method": "bdev_nvme_attach_controller" 00:29:54.525 }' 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:54.525 "params": { 00:29:54.525 "name": "Nvme1", 00:29:54.525 "trtype": "tcp", 00:29:54.525 "traddr": "10.0.0.2", 00:29:54.525 "adrfam": "ipv4", 00:29:54.525 "trsvcid": "4420", 00:29:54.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:54.525 "hdgst": false, 00:29:54.525 "ddgst": false 00:29:54.525 }, 00:29:54.525 "method": "bdev_nvme_attach_controller" 00:29:54.525 }' 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:54.525 "params": { 00:29:54.525 "name": "Nvme1", 00:29:54.525 "trtype": "tcp", 00:29:54.525 "traddr": "10.0.0.2", 00:29:54.525 "adrfam": "ipv4", 00:29:54.525 "trsvcid": "4420", 00:29:54.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:54.525 "hdgst": false, 00:29:54.525 "ddgst": false 00:29:54.525 }, 00:29:54.525 "method": "bdev_nvme_attach_controller" 00:29:54.525 }' 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:54.525 13:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:54.525 "params": { 00:29:54.525 "name": "Nvme1", 00:29:54.525 "trtype": "tcp", 00:29:54.525 "traddr": "10.0.0.2", 00:29:54.525 "adrfam": "ipv4", 00:29:54.525 "trsvcid": "4420", 00:29:54.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:54.525 "hdgst": false, 00:29:54.525 "ddgst": false 00:29:54.525 }, 00:29:54.525 "method": "bdev_nvme_attach_controller" 
00:29:54.525 }' 00:29:54.525 [2024-10-15 13:10:14.355770] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:29:54.525 [2024-10-15 13:10:14.355823] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:54.525 [2024-10-15 13:10:14.356403] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:29:54.525 [2024-10-15 13:10:14.356444] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:54.525 [2024-10-15 13:10:14.357043] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:29:54.525 [2024-10-15 13:10:14.357083] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:54.525 [2024-10-15 13:10:14.360525] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:29:54.525 [2024-10-15 13:10:14.360573] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:54.525 [2024-10-15 13:10:14.538391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.525 [2024-10-15 13:10:14.580626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:54.525 [2024-10-15 13:10:14.630548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.525 [2024-10-15 13:10:14.673094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:54.525 [2024-10-15 13:10:14.722541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.525 [2024-10-15 13:10:14.765825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.525 [2024-10-15 13:10:14.773676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:54.525 [2024-10-15 13:10:14.808429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:54.784 Running I/O for 1 seconds... 00:29:54.784 Running I/O for 1 seconds... 00:29:54.784 Running I/O for 1 seconds... 00:29:54.784 Running I/O for 1 seconds... 
00:29:55.720 7825.00 IOPS, 30.57 MiB/s 00:29:55.720 Latency(us) 00:29:55.720 [2024-10-15T11:10:16.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.720 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:55.720 Nvme1n1 : 1.02 7844.48 30.64 0.00 0.00 16241.60 1474.56 23468.13 00:29:55.720 [2024-10-15T11:10:16.039Z] =================================================================================================================== 00:29:55.720 [2024-10-15T11:10:16.039Z] Total : 7844.48 30.64 0.00 0.00 16241.60 1474.56 23468.13 00:29:55.720 11827.00 IOPS, 46.20 MiB/s 00:29:55.720 Latency(us) 00:29:55.720 [2024-10-15T11:10:16.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.720 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:55.720 Nvme1n1 : 1.01 11889.82 46.44 0.00 0.00 10731.90 1856.85 14730.00 00:29:55.720 [2024-10-15T11:10:16.039Z] =================================================================================================================== 00:29:55.720 [2024-10-15T11:10:16.039Z] Total : 11889.82 46.44 0.00 0.00 10731.90 1856.85 14730.00 00:29:55.720 7806.00 IOPS, 30.49 MiB/s 00:29:55.720 Latency(us) 00:29:55.720 [2024-10-15T11:10:16.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.721 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:55.721 Nvme1n1 : 1.00 7899.92 30.86 0.00 0.00 16166.63 3386.03 32206.26 00:29:55.721 [2024-10-15T11:10:16.040Z] =================================================================================================================== 00:29:55.721 [2024-10-15T11:10:16.040Z] Total : 7899.92 30.86 0.00 0.00 16166.63 3386.03 32206.26 00:29:55.721 13:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1416273 00:29:55.980 253632.00 IOPS, 990.75 MiB/s 00:29:55.980 Latency(us) 00:29:55.980 
[2024-10-15T11:10:16.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.980 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:55.980 Nvme1n1 : 1.00 253243.67 989.23 0.00 0.00 503.31 224.30 1505.77 00:29:55.980 [2024-10-15T11:10:16.299Z] =================================================================================================================== 00:29:55.980 [2024-10-15T11:10:16.299Z] Total : 253243.67 989.23 0.00 0.00 503.31 224.30 1505.77 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1416275 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1416278 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:55.980 13:10:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.980 rmmod nvme_tcp 00:29:55.980 rmmod nvme_fabrics 00:29:55.980 rmmod nvme_keyring 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1416172 ']' 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1416172 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1416172 ']' 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1416172 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:55.980 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1416172 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1416172' 00:29:56.239 killing process with pid 1416172 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1416172 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1416172 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.239 13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.239 
13:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:58.776 00:29:58.776 real 0m10.755s 00:29:58.776 user 0m14.829s 00:29:58.776 sys 0m6.479s 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:58.776 ************************************ 00:29:58.776 END TEST nvmf_bdev_io_wait 00:29:58.776 ************************************ 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:58.776 ************************************ 00:29:58.776 START TEST nvmf_queue_depth 00:29:58.776 ************************************ 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:58.776 * Looking for test storage... 
00:29:58.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:58.776 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:58.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.777 --rc genhtml_branch_coverage=1 00:29:58.777 --rc genhtml_function_coverage=1 00:29:58.777 --rc genhtml_legend=1 00:29:58.777 --rc geninfo_all_blocks=1 00:29:58.777 --rc geninfo_unexecuted_blocks=1 00:29:58.777 00:29:58.777 ' 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:58.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.777 --rc genhtml_branch_coverage=1 00:29:58.777 --rc genhtml_function_coverage=1 00:29:58.777 --rc genhtml_legend=1 00:29:58.777 --rc geninfo_all_blocks=1 00:29:58.777 --rc geninfo_unexecuted_blocks=1 00:29:58.777 00:29:58.777 ' 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:58.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.777 --rc genhtml_branch_coverage=1 00:29:58.777 --rc genhtml_function_coverage=1 00:29:58.777 --rc genhtml_legend=1 00:29:58.777 --rc geninfo_all_blocks=1 00:29:58.777 --rc geninfo_unexecuted_blocks=1 00:29:58.777 00:29:58.777 ' 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:58.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.777 --rc genhtml_branch_coverage=1 00:29:58.777 --rc genhtml_function_coverage=1 00:29:58.777 --rc genhtml_legend=1 00:29:58.777 --rc 
geninfo_all_blocks=1 00:29:58.777 --rc geninfo_unexecuted_blocks=1 00:29:58.777 00:29:58.777 ' 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.777 13:10:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:58.777 13:10:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:58.777 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:58.778 13:10:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:58.778 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:05.351 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.352 
13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:05.352 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.352 13:10:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:05.352 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 
)) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:05.352 Found net devices under 0000:86:00.0: cvl_0_0 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:05.352 Found net devices under 0000:86:00.1: cvl_0_1 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:05.352 13:10:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.352 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:05.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:30:05.352 00:30:05.352 --- 10.0.0.2 ping statistics --- 00:30:05.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.353 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:30:05.353 00:30:05.353 --- 10.0.0.1 ping statistics --- 00:30:05.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.353 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:05.353 13:10:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1420053 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1420053 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1420053 ']' 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:05.353 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.353 [2024-10-15 13:10:24.755376] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:05.353 [2024-10-15 13:10:24.756295] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:30:05.353 [2024-10-15 13:10:24.756328] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.353 [2024-10-15 13:10:24.832318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.353 [2024-10-15 13:10:24.875949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.353 [2024-10-15 13:10:24.875986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.353 [2024-10-15 13:10:24.875993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.353 [2024-10-15 13:10:24.875999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.353 [2024-10-15 13:10:24.876004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.353 [2024-10-15 13:10:24.876540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.353 [2024-10-15 13:10:24.941824] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:05.353 [2024-10-15 13:10:24.942037] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.353 [2024-10-15 13:10:25.633210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.353 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.613 Malloc0 00:30:05.613 13:10:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.613 [2024-10-15 13:10:25.709240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.613 
13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1420292 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1420292 /var/tmp/bdevperf.sock 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1420292 ']' 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:05.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:05.613 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.613 [2024-10-15 13:10:25.760834] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:30:05.613 [2024-10-15 13:10:25.760894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420292 ] 00:30:05.613 [2024-10-15 13:10:25.827320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.613 [2024-10-15 13:10:25.869462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.872 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:05.872 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:30:05.872 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:05.872 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.872 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.872 NVMe0n1 00:30:05.872 13:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.872 13:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:05.872 Running I/O for 10 seconds... 
00:30:07.915 12232.00 IOPS, 47.78 MiB/s [2024-10-15T11:10:29.610Z] 12294.00 IOPS, 48.02 MiB/s [2024-10-15T11:10:30.547Z] 12365.33 IOPS, 48.30 MiB/s [2024-10-15T11:10:31.484Z] 12467.25 IOPS, 48.70 MiB/s [2024-10-15T11:10:32.422Z] 12502.80 IOPS, 48.84 MiB/s [2024-10-15T11:10:33.359Z] 12568.83 IOPS, 49.10 MiB/s [2024-10-15T11:10:34.295Z] 12599.71 IOPS, 49.22 MiB/s [2024-10-15T11:10:35.232Z] 12675.50 IOPS, 49.51 MiB/s [2024-10-15T11:10:36.609Z] 12707.33 IOPS, 49.64 MiB/s [2024-10-15T11:10:36.609Z] 12722.90 IOPS, 49.70 MiB/s 00:30:16.290 Latency(us) 00:30:16.290 [2024-10-15T11:10:36.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.290 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:16.290 Verification LBA range: start 0x0 length 0x4000 00:30:16.290 NVMe0n1 : 10.05 12755.34 49.83 0.00 0.00 80020.81 13169.62 50930.83 00:30:16.290 [2024-10-15T11:10:36.609Z] =================================================================================================================== 00:30:16.290 [2024-10-15T11:10:36.609Z] Total : 12755.34 49.83 0.00 0.00 80020.81 13169.62 50930.83 00:30:16.290 { 00:30:16.290 "results": [ 00:30:16.290 { 00:30:16.290 "job": "NVMe0n1", 00:30:16.290 "core_mask": "0x1", 00:30:16.290 "workload": "verify", 00:30:16.290 "status": "finished", 00:30:16.290 "verify_range": { 00:30:16.290 "start": 0, 00:30:16.290 "length": 16384 00:30:16.290 }, 00:30:16.290 "queue_depth": 1024, 00:30:16.290 "io_size": 4096, 00:30:16.290 "runtime": 10.052731, 00:30:16.290 "iops": 12755.339817607772, 00:30:16.290 "mibps": 49.82554616253036, 00:30:16.290 "io_failed": 0, 00:30:16.290 "io_timeout": 0, 00:30:16.290 "avg_latency_us": 80020.8066118973, 00:30:16.290 "min_latency_us": 13169.615238095239, 00:30:16.290 "max_latency_us": 50930.834285714285 00:30:16.290 } 00:30:16.290 ], 00:30:16.290 "core_count": 1 00:30:16.290 } 00:30:16.290 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1420292 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1420292 ']' 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1420292 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1420292 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1420292' 00:30:16.291 killing process with pid 1420292 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1420292 00:30:16.291 Received shutdown signal, test time was about 10.000000 seconds 00:30:16.291 00:30:16.291 Latency(us) 00:30:16.291 [2024-10-15T11:10:36.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.291 [2024-10-15T11:10:36.610Z] =================================================================================================================== 00:30:16.291 [2024-10-15T11:10:36.610Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1420292 00:30:16.291 13:10:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:16.291 rmmod nvme_tcp 00:30:16.291 rmmod nvme_fabrics 00:30:16.291 rmmod nvme_keyring 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1420053 ']' 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1420053 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1420053 ']' 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1420053 00:30:16.291 13:10:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:16.291 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1420053 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1420053' 00:30:16.550 killing process with pid 1420053 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1420053 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1420053 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 
00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.550 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.087 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:19.087 00:30:19.087 real 0m20.272s 00:30:19.087 user 0m22.766s 00:30:19.087 sys 0m6.369s 00:30:19.087 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.087 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:19.087 ************************************ 00:30:19.087 END TEST nvmf_queue_depth 00:30:19.087 ************************************ 00:30:19.087 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:19.087 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:19.087 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:19.087 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:19.087 ************************************ 00:30:19.087 START 
TEST nvmf_target_multipath 00:30:19.087 ************************************ 00:30:19.087 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:19.087 * Looking for test storage... 00:30:19.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:19.087 13:10:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:19.087 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:19.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.088 --rc genhtml_branch_coverage=1 00:30:19.088 --rc genhtml_function_coverage=1 00:30:19.088 --rc genhtml_legend=1 00:30:19.088 --rc geninfo_all_blocks=1 00:30:19.088 --rc geninfo_unexecuted_blocks=1 00:30:19.088 00:30:19.088 ' 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:19.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.088 --rc genhtml_branch_coverage=1 00:30:19.088 --rc genhtml_function_coverage=1 00:30:19.088 --rc genhtml_legend=1 00:30:19.088 --rc geninfo_all_blocks=1 00:30:19.088 --rc geninfo_unexecuted_blocks=1 00:30:19.088 00:30:19.088 ' 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:19.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.088 --rc genhtml_branch_coverage=1 00:30:19.088 --rc genhtml_function_coverage=1 00:30:19.088 --rc genhtml_legend=1 00:30:19.088 --rc geninfo_all_blocks=1 00:30:19.088 --rc geninfo_unexecuted_blocks=1 00:30:19.088 00:30:19.088 ' 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:19.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.088 --rc genhtml_branch_coverage=1 00:30:19.088 --rc genhtml_function_coverage=1 00:30:19.088 --rc genhtml_legend=1 00:30:19.088 --rc geninfo_all_blocks=1 00:30:19.088 --rc geninfo_unexecuted_blocks=1 00:30:19.088 00:30:19.088 ' 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:19.088 13:10:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:19.088 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:19.089 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:19.089 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:19.089 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.089 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.089 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.089 13:10:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:19.089 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:19.089 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:19.089 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:25.659 13:10:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:25.659 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.659 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:25.660 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:25.660 Found net devices under 0000:86:00.0: cvl_0_0 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.660 13:10:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:25.660 Found net devices under 0000:86:00.1: cvl_0_1 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.660 13:10:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:25.660 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.660 13:10:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:25.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:25.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:30:25.660 00:30:25.660 --- 10.0.0.2 ping statistics --- 00:30:25.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.660 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:25.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:30:25.660 00:30:25.660 --- 10.0.0.1 ping statistics --- 00:30:25.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.660 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:25.660 only one NIC for nvmf test 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:25.660 13:10:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:25.660 rmmod nvme_tcp 00:30:25.660 rmmod nvme_fabrics 00:30:25.660 rmmod nvme_keyring 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:30:25.660 13:10:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:25.660 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.661 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.661 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.047 
13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.047 00:30:27.047 real 0m8.328s 00:30:27.047 user 0m1.827s 00:30:27.047 sys 0m4.517s 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:27.047 ************************************ 00:30:27.047 END TEST nvmf_target_multipath 00:30:27.047 ************************************ 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:27.047 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:27.047 ************************************ 00:30:27.047 START TEST nvmf_zcopy 00:30:27.047 ************************************ 00:30:27.048 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:27.308 * Looking for test storage... 
00:30:27.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:27.308 13:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:27.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.308 --rc genhtml_branch_coverage=1 00:30:27.308 --rc genhtml_function_coverage=1 00:30:27.308 --rc genhtml_legend=1 00:30:27.308 --rc geninfo_all_blocks=1 00:30:27.308 --rc geninfo_unexecuted_blocks=1 00:30:27.308 00:30:27.308 ' 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:27.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.308 --rc genhtml_branch_coverage=1 00:30:27.308 --rc genhtml_function_coverage=1 00:30:27.308 --rc genhtml_legend=1 00:30:27.308 --rc geninfo_all_blocks=1 00:30:27.308 --rc geninfo_unexecuted_blocks=1 00:30:27.308 00:30:27.308 ' 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:27.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.308 --rc genhtml_branch_coverage=1 00:30:27.308 --rc genhtml_function_coverage=1 00:30:27.308 --rc genhtml_legend=1 00:30:27.308 --rc geninfo_all_blocks=1 00:30:27.308 --rc geninfo_unexecuted_blocks=1 00:30:27.308 00:30:27.308 ' 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:27.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.308 --rc genhtml_branch_coverage=1 00:30:27.308 --rc genhtml_function_coverage=1 00:30:27.308 --rc genhtml_legend=1 00:30:27.308 --rc geninfo_all_blocks=1 00:30:27.308 --rc geninfo_unexecuted_blocks=1 00:30:27.308 00:30:27.308 ' 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.308 13:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:27.308 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.309 13:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.309 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:33.880 
13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:33.880 13:10:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:33.880 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:33.880 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:33.880 Found net devices under 0000:86:00.0: cvl_0_0 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:33.880 Found net devices under 0000:86:00.1: cvl_0_1 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.880 13:10:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:33.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:30:33.880 00:30:33.880 --- 10.0.0.2 ping statistics --- 00:30:33.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.880 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:30:33.880 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:33.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:30:33.881 00:30:33.881 --- 10.0.0.1 ping statistics --- 00:30:33.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.881 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # 
nvmfpid=1428952 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1428952 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1428952 ']' 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.881 [2024-10-15 13:10:53.553857] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:33.881 [2024-10-15 13:10:53.554747] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:30:33.881 [2024-10-15 13:10:53.554778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.881 [2024-10-15 13:10:53.627493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.881 [2024-10-15 13:10:53.668861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.881 [2024-10-15 13:10:53.668893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.881 [2024-10-15 13:10:53.668903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.881 [2024-10-15 13:10:53.668910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.881 [2024-10-15 13:10:53.668915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.881 [2024-10-15 13:10:53.669466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.881 [2024-10-15 13:10:53.736041] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:33.881 [2024-10-15 13:10:53.736255] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.881 [2024-10-15 13:10:53.810088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.881 
13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.881 [2024-10-15 13:10:53.830269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.881 malloc0 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:33.881 { 00:30:33.881 "params": { 00:30:33.881 "name": "Nvme$subsystem", 00:30:33.881 "trtype": "$TEST_TRANSPORT", 00:30:33.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:33.881 "adrfam": "ipv4", 00:30:33.881 "trsvcid": "$NVMF_PORT", 00:30:33.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:33.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:33.881 "hdgst": ${hdgst:-false}, 00:30:33.881 "ddgst": ${ddgst:-false} 00:30:33.881 }, 00:30:33.881 "method": "bdev_nvme_attach_controller" 00:30:33.881 } 00:30:33.881 EOF 00:30:33.881 )") 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:30:33.881 13:10:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:30:33.881 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:33.881 "params": { 00:30:33.881 "name": "Nvme1", 00:30:33.881 "trtype": "tcp", 00:30:33.881 "traddr": "10.0.0.2", 00:30:33.881 "adrfam": "ipv4", 00:30:33.881 "trsvcid": "4420", 00:30:33.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:33.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:33.881 "hdgst": false, 00:30:33.881 "ddgst": false 00:30:33.881 }, 00:30:33.881 "method": "bdev_nvme_attach_controller" 00:30:33.881 }' 00:30:33.881 [2024-10-15 13:10:53.913411] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:30:33.881 [2024-10-15 13:10:53.913451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428973 ] 00:30:33.881 [2024-10-15 13:10:53.981632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.881 [2024-10-15 13:10:54.025371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.140 Running I/O for 10 seconds... 
00:30:36.012 8360.00 IOPS, 65.31 MiB/s [2024-10-15T11:10:57.707Z] 8412.50 IOPS, 65.72 MiB/s [2024-10-15T11:10:58.643Z] 8434.00 IOPS, 65.89 MiB/s [2024-10-15T11:10:59.579Z] 8445.00 IOPS, 65.98 MiB/s [2024-10-15T11:11:00.516Z] 8458.80 IOPS, 66.08 MiB/s [2024-10-15T11:11:01.451Z] 8455.17 IOPS, 66.06 MiB/s [2024-10-15T11:11:02.387Z] 8448.86 IOPS, 66.01 MiB/s [2024-10-15T11:11:03.324Z] 8452.62 IOPS, 66.04 MiB/s [2024-10-15T11:11:04.701Z] 8453.89 IOPS, 66.05 MiB/s [2024-10-15T11:11:04.701Z] 8462.20 IOPS, 66.11 MiB/s 00:30:44.382 Latency(us) 00:30:44.382 [2024-10-15T11:11:04.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.382 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:44.382 Verification LBA range: start 0x0 length 0x1000 00:30:44.382 Nvme1n1 : 10.05 8430.69 65.86 0.00 0.00 15083.29 2715.06 43940.33 00:30:44.382 [2024-10-15T11:11:04.701Z] =================================================================================================================== 00:30:44.382 [2024-10-15T11:11:04.701Z] Total : 8430.69 65.86 0.00 0.00 15083.29 2715.06 43940.33 00:30:44.382 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1430590 00:30:44.382 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:44.382 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.382 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:44.382 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:44.382 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:30:44.382 13:11:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:30:44.382 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:44.382 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:44.382 { 00:30:44.382 "params": { 00:30:44.382 "name": "Nvme$subsystem", 00:30:44.382 "trtype": "$TEST_TRANSPORT", 00:30:44.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.382 "adrfam": "ipv4", 00:30:44.382 "trsvcid": "$NVMF_PORT", 00:30:44.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.382 "hdgst": ${hdgst:-false}, 00:30:44.382 "ddgst": ${ddgst:-false} 00:30:44.382 }, 00:30:44.382 "method": "bdev_nvme_attach_controller" 00:30:44.382 } 00:30:44.382 EOF 00:30:44.382 )") 00:30:44.382 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:30:44.382 [2024-10-15 13:11:04.541810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.382 [2024-10-15 13:11:04.541845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.382 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:30:44.382 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:30:44.382 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:44.382 "params": { 00:30:44.382 "name": "Nvme1", 00:30:44.382 "trtype": "tcp", 00:30:44.382 "traddr": "10.0.0.2", 00:30:44.382 "adrfam": "ipv4", 00:30:44.382 "trsvcid": "4420", 00:30:44.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.382 "hdgst": false, 00:30:44.382 "ddgst": false 00:30:44.382 }, 00:30:44.382 "method": "bdev_nvme_attach_controller" 00:30:44.382 }' 00:30:44.382 [2024-10-15 13:11:04.553773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.382 [2024-10-15 13:11:04.553786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.382 [2024-10-15 13:11:04.565771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.382 [2024-10-15 13:11:04.565781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.382 [2024-10-15 13:11:04.577772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.382 [2024-10-15 13:11:04.577782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.382 [2024-10-15 13:11:04.582505] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:30:44.382 [2024-10-15 13:11:04.582545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430590 ] 00:30:44.382 [2024-10-15 13:11:04.589771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.382 [2024-10-15 13:11:04.589782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.382 [2024-10-15 13:11:04.601770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.382 [2024-10-15 13:11:04.601781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.382 [2024-10-15 13:11:04.613770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.382 [2024-10-15 13:11:04.613781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.382 [2024-10-15 13:11:04.625767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.382 [2024-10-15 13:11:04.625776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.382 [2024-10-15 13:11:04.637769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.382 [2024-10-15 13:11:04.637782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.383 [2024-10-15 13:11:04.649770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.383 [2024-10-15 13:11:04.649779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.383 [2024-10-15 13:11:04.651699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.383 [2024-10-15 13:11:04.661770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:44.383 [2024-10-15 13:11:04.661783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.383 [2024-10-15 13:11:04.673772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.383 [2024-10-15 13:11:04.673785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.383 [2024-10-15 13:11:04.685773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.383 [2024-10-15 13:11:04.685788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.383 [2024-10-15 13:11:04.693355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.383 [2024-10-15 13:11:04.697769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.383 [2024-10-15 13:11:04.697780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.709787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.709807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.721776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.721794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.733772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.733787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.745771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.745785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.757776] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.757789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.769770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.769780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.781781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.781802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.793774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.793787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.805776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.805791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.817769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.817780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.829770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.829780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.841766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.841776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.853773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.853791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.865775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.865789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.877773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.877791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 Running I/O for 5 seconds... 00:30:44.642 [2024-10-15 13:11:04.894986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.895005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.906180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.906198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.917613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.917631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.931622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.931641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.946114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.946131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.642 [2024-10-15 13:11:04.957313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:30:44.642 [2024-10-15 13:11:04.957331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:04.971608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:04.971626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:04.986234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:04.986252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:04.997829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:04.997848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.011286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.011305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.025597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.025621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.037569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.037588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.048725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.048743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.063040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 
[2024-10-15 13:11:05.063058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.077864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.077882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.091447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.091465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.105928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.105948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.121575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.121593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.133199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.133217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.146566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.146583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.161496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.161514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.174379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.174396] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.184783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.184801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.199026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.199044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.901 [2024-10-15 13:11:05.213666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.901 [2024-10-15 13:11:05.213684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.225230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.225251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.239824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.239843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.254334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.254351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.265128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.265145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.279623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.279641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:45.159 [2024-10-15 13:11:05.294387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.294405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.309501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.309519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.322866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.322884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.338281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.338298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.353586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.353608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.366196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.366221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.381885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.381902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.392047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.392063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.406829] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.406848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.422093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.422111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.435210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.435239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.449844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.449862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.460939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.460957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.159 [2024-10-15 13:11:05.474976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.159 [2024-10-15 13:11:05.474994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.489657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.489676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.501093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.501111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.515149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.515166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.526237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.526253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.538937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.538953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.553613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.553634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.565189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.565207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.579118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.579135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.593754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.593771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.605223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.605240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.619199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 
[2024-10-15 13:11:05.619217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.633775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.633793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.644741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.644758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.658331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.658349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.673553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.673571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.684903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.684920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.699274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.699292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.714020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.714038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.725472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.725490] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.418 [2024-10-15 13:11:05.737795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.418 [2024-10-15 13:11:05.737812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.677 [2024-10-15 13:11:05.748726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.677 [2024-10-15 13:11:05.748744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.677 [2024-10-15 13:11:05.763035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.677 [2024-10-15 13:11:05.763052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.677 [2024-10-15 13:11:05.777762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.677 [2024-10-15 13:11:05.777780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.677 [2024-10-15 13:11:05.789115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.677 [2024-10-15 13:11:05.789133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.677 [2024-10-15 13:11:05.803447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.677 [2024-10-15 13:11:05.803464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.677 [2024-10-15 13:11:05.817794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.677 [2024-10-15 13:11:05.817812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.677 [2024-10-15 13:11:05.829295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.677 [2024-10-15 13:11:05.829312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:45.677 [2024-10-15 13:11:05.843525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.677 [2024-10-15 13:11:05.843543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pair (subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats continuously from 13:11:05.857 through 13:11:07.925 ...]
00:30:45.677 16557.00 IOPS, 129.35 MiB/s [2024-10-15T11:11:05.996Z]
00:30:46.713 16582.50 IOPS, 129.55 MiB/s [2024-10-15T11:11:07.032Z]
00:30:47.750 16594.67 IOPS, 129.65 MiB/s [2024-10-15T11:11:08.069Z]
00:30:47.750 [2024-10-15 13:11:07.937686] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.750 [2024-10-15 13:11:07.937703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.750 [2024-10-15 13:11:07.950712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.750 [2024-10-15 13:11:07.950729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.750 [2024-10-15 13:11:07.965819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.750 [2024-10-15 13:11:07.965836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.750 [2024-10-15 13:11:07.978352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.750 [2024-10-15 13:11:07.978369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.750 [2024-10-15 13:11:07.993870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.750 [2024-10-15 13:11:07.993888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.750 [2024-10-15 13:11:08.005369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.750 [2024-10-15 13:11:08.005386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.750 [2024-10-15 13:11:08.019087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.750 [2024-10-15 13:11:08.019104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.750 [2024-10-15 13:11:08.033805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.750 [2024-10-15 13:11:08.033823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.751 [2024-10-15 13:11:08.045177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:47.751 [2024-10-15 13:11:08.045195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.751 [2024-10-15 13:11:08.059178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.751 [2024-10-15 13:11:08.059195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.073527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.073549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.085303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.085320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.098528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.098545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.113790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.113809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.124628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.124646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.139090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.139108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.149159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 
[2024-10-15 13:11:08.149181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.162922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.162939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.178052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.178070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.189961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.189978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.201099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.201117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.214367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.214384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.225295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.225312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.239546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.239564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.254156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.254173] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.269649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.269667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.280697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.280715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.295464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.295482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.309568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.309586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.010 [2024-10-15 13:11:08.321045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.010 [2024-10-15 13:11:08.321063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.269 [2024-10-15 13:11:08.335259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.269 [2024-10-15 13:11:08.335278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.269 [2024-10-15 13:11:08.349781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.269 [2024-10-15 13:11:08.349799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.269 [2024-10-15 13:11:08.361303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.361322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:48.270 [2024-10-15 13:11:08.374050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.374068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.386620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.386637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.401818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.401840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.413013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.413031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.427070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.427088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.441585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.441607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.453129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.453147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.467098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.467115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.481389] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.481406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.494060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.494077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.505410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.505428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.519195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.519214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.534203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.534220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.545478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.545496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.558564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.558581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.573231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.573249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.270 [2024-10-15 13:11:08.586920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:48.270 [2024-10-15 13:11:08.586938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.601884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.601903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.613367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.613384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.626565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.626583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.641632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.641650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.653483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.653504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.665816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.665833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.678953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.678970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.693521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 
[2024-10-15 13:11:08.693539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.706352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.706369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.718075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.718092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.731202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.731219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.745809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.745827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.757135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.757153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.770227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.770243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.783270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.783287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.797613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.797632] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.808068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.808086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.822843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.822861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.529 [2024-10-15 13:11:08.837051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.529 [2024-10-15 13:11:08.837071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.788 [2024-10-15 13:11:08.852049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.788 [2024-10-15 13:11:08.852069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.788 [2024-10-15 13:11:08.866318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.788 [2024-10-15 13:11:08.866337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.788 [2024-10-15 13:11:08.877001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.788 [2024-10-15 13:11:08.877019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.788 [2024-10-15 13:11:08.890976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.788 [2024-10-15 13:11:08.890995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.788 16631.50 IOPS, 129.93 MiB/s [2024-10-15T11:11:09.107Z] [2024-10-15 13:11:08.904821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:08.904839] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:08.918655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:08.918672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:08.934236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:08.934254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:08.946604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:08.946622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:08.961645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:08.961663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:08.975635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:08.975654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:08.990255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:08.990273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:09.005812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:09.005831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:09.017031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:09.017050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:48.789 [2024-10-15 13:11:09.031329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:09.031347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:09.045565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:09.045583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:09.056312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:09.056330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:09.070947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:09.070965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:09.085620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:09.085649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.789 [2024-10-15 13:11:09.099224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.789 [2024-10-15 13:11:09.099241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.113819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.113838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.125064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.125082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.138212] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.138230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.149593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.149617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.162896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.162914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.177988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.178007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.189932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.189949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.203268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.203286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.217761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.217779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.228778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.228796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.242201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.242218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.257863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.257881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.269049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.269071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.282863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.282880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.297451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.297469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.309017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.309035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.323132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.323149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.048 [2024-10-15 13:11:09.338093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.048 [2024-10-15 13:11:09.338111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.049 [2024-10-15 13:11:09.349844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.049 
[2024-10-15 13:11:09.349862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.049 [2024-10-15 13:11:09.363058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.049 [2024-10-15 13:11:09.363075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.377928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.377947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.388647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.388665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.402791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.402814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.417010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.417028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.431338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.431357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.445431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.445449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.458288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.458304] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.473634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.473652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.484921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.484938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.498990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.499007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.514015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.514034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.524896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.524914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.539400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.539417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.554195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.554217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.566769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.566786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:49.308 [2024-10-15 13:11:09.581554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.581572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.594411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.594428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.610379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.610397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.308 [2024-10-15 13:11:09.625179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.308 [2024-10-15 13:11:09.625198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.638725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.638743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.653800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.653817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.664775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.664797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.679351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.679368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.694116] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.694134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.705592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.705614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.719883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.719902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.733832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.733850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.745729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.745747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.758794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.758812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.774320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.774339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.785090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.785108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.798590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.798613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.813417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.813436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.824955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.824974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.839188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.839206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.853624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.853641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.864484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.864502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.568 [2024-10-15 13:11:09.878950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.568 [2024-10-15 13:11:09.878968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:09.894171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:09.894190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 16653.60 IOPS, 130.11 MiB/s [2024-10-15T11:11:10.147Z] [2024-10-15 13:11:09.906065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:09.906083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 00:30:49.828 Latency(us) 00:30:49.828 [2024-10-15T11:11:10.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.828 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:30:49.828 Nvme1n1 : 5.01 16654.90 130.12 0.00 0.00 7677.47 2075.31 12670.29 00:30:49.828 [2024-10-15T11:11:10.147Z] =================================================================================================================== 00:30:49.828 [2024-10-15T11:11:10.147Z] Total : 16654.90 130.12 0.00 0.00 7677.47 2075.31 12670.29 00:30:49.828 [2024-10-15 13:11:09.917852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:09.917869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:09.929772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:09.929786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:09.941782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:09.941803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:09.953777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:09.953796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:09.965780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:09.965796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:09.977776] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:09.977794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:09.989774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:09.989788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:10.001772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:10.001787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:10.013796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:10.013815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:10.025772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:10.025782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:10.037773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:10.037786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:10.049776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:10.049789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 [2024-10-15 13:11:10.061766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.828 [2024-10-15 13:11:10.061776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 
42: kill: (1430590) - No such process 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1430590 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:49.828 delay0 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.828 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w 
randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:50.087 [2024-10-15 13:11:10.196820] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:56.663 Initializing NVMe Controllers 00:30:56.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:56.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:56.663 Initialization complete. Launching workers. 00:30:56.663 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 262, failed: 21611 00:30:56.663 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21765, failed to submit 108 00:30:56.663 success 21669, unsuccessful 96, failed 0 00:30:56.663 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:56.663 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:56.663 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:56.663 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:56.663 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:56.663 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:56.663 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:56.663 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:56.663 rmmod nvme_tcp 00:30:56.922 rmmod nvme_fabrics 00:30:56.922 rmmod nvme_keyring 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1428952 ']' 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1428952 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1428952 ']' 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1428952 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1428952 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1428952' 00:30:56.922 killing process with pid 1428952 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1428952 00:30:56.922 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1428952 00:30:57.181 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:57.181 13:11:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:57.181 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:57.181 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:57.181 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:30:57.181 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:57.181 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:30:57.181 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.181 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.181 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.181 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.181 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.087 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:59.087 00:30:59.087 real 0m31.968s 00:30:59.087 user 0m40.757s 00:30:59.087 sys 0m13.245s 00:30:59.087 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:59.087 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.087 ************************************ 00:30:59.087 END TEST nvmf_zcopy 00:30:59.087 ************************************ 00:30:59.087 13:11:19 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:59.087 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:59.087 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:59.087 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:59.087 ************************************ 00:30:59.087 START TEST nvmf_nmic 00:30:59.087 ************************************ 00:30:59.087 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:59.346 * Looking for test storage... 00:30:59.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.346 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:59.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.347 --rc genhtml_branch_coverage=1 00:30:59.347 --rc 
genhtml_function_coverage=1 00:30:59.347 --rc genhtml_legend=1 00:30:59.347 --rc geninfo_all_blocks=1 00:30:59.347 --rc geninfo_unexecuted_blocks=1 00:30:59.347 00:30:59.347 ' 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:59.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.347 --rc genhtml_branch_coverage=1 00:30:59.347 --rc genhtml_function_coverage=1 00:30:59.347 --rc genhtml_legend=1 00:30:59.347 --rc geninfo_all_blocks=1 00:30:59.347 --rc geninfo_unexecuted_blocks=1 00:30:59.347 00:30:59.347 ' 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:59.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.347 --rc genhtml_branch_coverage=1 00:30:59.347 --rc genhtml_function_coverage=1 00:30:59.347 --rc genhtml_legend=1 00:30:59.347 --rc geninfo_all_blocks=1 00:30:59.347 --rc geninfo_unexecuted_blocks=1 00:30:59.347 00:30:59.347 ' 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:59.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.347 --rc genhtml_branch_coverage=1 00:30:59.347 --rc genhtml_function_coverage=1 00:30:59.347 --rc genhtml_legend=1 00:30:59.347 --rc geninfo_all_blocks=1 00:30:59.347 --rc geninfo_unexecuted_blocks=1 00:30:59.347 00:30:59.347 ' 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.347 13:11:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.347 13:11:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.347 13:11:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.347 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.061 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.061 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:06.061 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:06.062 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:06.062 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:06.062 Found net devices under 0000:86:00.0: cvl_0_0 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:06.062 Found net devices under 0000:86:00.1: cvl_0_1 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 
00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:06.062 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:06.062 13:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:06.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:06.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:31:06.063 00:31:06.063 --- 10.0.0.2 ping statistics --- 00:31:06.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.063 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:06.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:31:06.063 00:31:06.063 --- 10.0.0.1 ping statistics --- 00:31:06.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.063 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:06.063 13:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1436151 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1436151 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1436151 ']' 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.063 [2024-10-15 13:11:25.550441] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:06.063 [2024-10-15 13:11:25.551420] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:31:06.063 [2024-10-15 13:11:25.551455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.063 [2024-10-15 13:11:25.625761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:06.063 [2024-10-15 13:11:25.669616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.063 [2024-10-15 13:11:25.669656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.063 [2024-10-15 13:11:25.669664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:06.063 [2024-10-15 13:11:25.669670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:06.063 [2024-10-15 13:11:25.669677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:06.063 [2024-10-15 13:11:25.671184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.063 [2024-10-15 13:11:25.671296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:06.063 [2024-10-15 13:11:25.671404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.063 [2024-10-15 13:11:25.671405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:06.063 [2024-10-15 13:11:25.740143] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:06.063 [2024-10-15 13:11:25.740941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:06.063 [2024-10-15 13:11:25.741317] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:06.063 [2024-10-15 13:11:25.741954] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:06.063 [2024-10-15 13:11:25.741971] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.063 [2024-10-15 13:11:25.807899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.063 Malloc0 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.063 [2024-10-15 
13:11:25.884246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:06.063 test case1: single bdev can't be used in multiple subsystems 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.063 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.064 13:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.064 [2024-10-15 13:11:25.907865] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:06.064 [2024-10-15 13:11:25.907884] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:06.064 [2024-10-15 13:11:25.907891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.064 request: 00:31:06.064 { 00:31:06.064 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:06.064 "namespace": { 00:31:06.064 "bdev_name": "Malloc0", 00:31:06.064 "no_auto_visible": false 00:31:06.064 }, 00:31:06.064 "method": "nvmf_subsystem_add_ns", 00:31:06.064 "req_id": 1 00:31:06.064 } 00:31:06.064 Got JSON-RPC error response 00:31:06.064 response: 00:31:06.064 { 00:31:06.064 "code": -32602, 00:31:06.064 "message": "Invalid parameters" 00:31:06.064 } 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:06.064 Adding namespace failed - expected result. 
00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:06.064 test case2: host connect to nvmf target in multiple paths 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.064 [2024-10-15 13:11:25.919948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.064 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:06.064 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:06.323 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:06.323 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:31:06.323 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:06.323 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:06.323 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:31:08.227 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:08.227 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:08.227 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:31:08.227 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:08.227 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:08.227 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:31:08.227 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:08.227 [global] 00:31:08.227 thread=1 00:31:08.227 invalidate=1 00:31:08.227 rw=write 00:31:08.227 time_based=1 00:31:08.227 runtime=1 00:31:08.227 ioengine=libaio 00:31:08.227 direct=1 00:31:08.227 bs=4096 00:31:08.227 iodepth=1 00:31:08.227 norandommap=0 00:31:08.227 numjobs=1 00:31:08.227 00:31:08.227 verify_dump=1 00:31:08.227 verify_backlog=512 00:31:08.227 verify_state_save=0 00:31:08.227 do_verify=1 00:31:08.227 verify=crc32c-intel 00:31:08.227 [job0] 00:31:08.227 filename=/dev/nvme0n1 00:31:08.227 Could not set queue depth (nvme0n1) 00:31:08.486 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:08.486 fio-3.35 00:31:08.486 Starting 1 thread 00:31:09.863 00:31:09.864 job0: (groupid=0, jobs=1): err= 0: pid=1436772: Tue Oct 15 
13:11:29 2024 00:31:09.864 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:31:09.864 slat (nsec): min=9725, max=26685, avg=22763.41, stdev=3117.55 00:31:09.864 clat (usec): min=40913, max=41312, avg=40984.86, stdev=80.84 00:31:09.864 lat (usec): min=40936, max=41321, avg=41007.62, stdev=78.07 00:31:09.864 clat percentiles (usec): 00:31:09.864 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:09.864 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:09.864 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:09.864 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:09.864 | 99.99th=[41157] 00:31:09.864 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:31:09.864 slat (usec): min=10, max=25859, avg=62.46, stdev=1142.29 00:31:09.864 clat (usec): min=126, max=318, avg=136.42, stdev= 9.68 00:31:09.864 lat (usec): min=137, max=26045, avg=198.87, stdev=1144.57 00:31:09.864 clat percentiles (usec): 00:31:09.864 | 1.00th=[ 129], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 133], 00:31:09.864 | 30.00th=[ 135], 40.00th=[ 135], 50.00th=[ 135], 60.00th=[ 137], 00:31:09.864 | 70.00th=[ 139], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 145], 00:31:09.864 | 99.00th=[ 155], 99.50th=[ 178], 99.90th=[ 318], 99.95th=[ 318], 00:31:09.864 | 99.99th=[ 318] 00:31:09.864 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:09.864 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:09.864 lat (usec) : 250=95.69%, 500=0.19% 00:31:09.864 lat (msec) : 50=4.12% 00:31:09.864 cpu : usr=0.60%, sys=0.70%, ctx=537, majf=0, minf=1 00:31:09.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.864 issued rwts: 
total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:09.864 00:31:09.864 Run status group 0 (all jobs): 00:31:09.864 READ: bw=87.3KiB/s (89.4kB/s), 87.3KiB/s-87.3KiB/s (89.4kB/s-89.4kB/s), io=88.0KiB (90.1kB), run=1008-1008msec 00:31:09.864 WRITE: bw=2032KiB/s (2081kB/s), 2032KiB/s-2032KiB/s (2081kB/s-2081kB/s), io=2048KiB (2097kB), run=1008-1008msec 00:31:09.864 00:31:09.864 Disk stats (read/write): 00:31:09.864 nvme0n1: ios=45/512, merge=0/0, ticks=1766/64, in_queue=1830, util=98.30% 00:31:09.864 13:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:09.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:09.864 13:11:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:09.864 rmmod nvme_tcp 00:31:09.864 rmmod nvme_fabrics 00:31:09.864 rmmod nvme_keyring 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1436151 ']' 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1436151 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1436151 ']' 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1436151 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1436151 
00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1436151' 00:31:09.864 killing process with pid 1436151 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1436151 00:31:09.864 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1436151 00:31:10.123 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:10.123 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:10.123 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:10.123 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:10.123 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:31:10.123 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:10.123 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:31:10.123 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:10.123 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:10.123 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.123 13:11:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.123 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:12.658 00:31:12.658 real 0m13.026s 00:31:12.658 user 0m23.803s 00:31:12.658 sys 0m6.105s 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:12.658 ************************************ 00:31:12.658 END TEST nvmf_nmic 00:31:12.658 ************************************ 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:12.658 ************************************ 00:31:12.658 START TEST nvmf_fio_target 00:31:12.658 ************************************ 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:12.658 * Looking for test storage... 
00:31:12.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:12.658 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:12.659 
13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:12.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.659 --rc genhtml_branch_coverage=1 00:31:12.659 --rc genhtml_function_coverage=1 00:31:12.659 --rc genhtml_legend=1 00:31:12.659 --rc geninfo_all_blocks=1 00:31:12.659 --rc geninfo_unexecuted_blocks=1 00:31:12.659 00:31:12.659 ' 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:12.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.659 --rc genhtml_branch_coverage=1 00:31:12.659 --rc genhtml_function_coverage=1 00:31:12.659 --rc genhtml_legend=1 00:31:12.659 --rc geninfo_all_blocks=1 00:31:12.659 --rc geninfo_unexecuted_blocks=1 00:31:12.659 00:31:12.659 ' 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:12.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.659 --rc genhtml_branch_coverage=1 00:31:12.659 --rc genhtml_function_coverage=1 00:31:12.659 --rc genhtml_legend=1 00:31:12.659 --rc geninfo_all_blocks=1 00:31:12.659 --rc geninfo_unexecuted_blocks=1 00:31:12.659 00:31:12.659 ' 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:12.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.659 --rc genhtml_branch_coverage=1 00:31:12.659 --rc genhtml_function_coverage=1 00:31:12.659 --rc genhtml_legend=1 00:31:12.659 --rc geninfo_all_blocks=1 
00:31:12.659 --rc geninfo_unexecuted_blocks=1 00:31:12.659 00:31:12.659 ' 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:12.659 
13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.659 13:11:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:12.659 
13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:12.659 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:12.659 13:11:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:19.228 13:11:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:19.228 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:19.228 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.228 
13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:19.228 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:19.228 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:19.229 Found net devices under 0000:86:00.1: cvl_0_1 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:19.229 13:11:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:19.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:19.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:31:19.229 00:31:19.229 --- 10.0.0.2 ping statistics --- 00:31:19.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.229 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:19.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:19.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:31:19.229 00:31:19.229 --- 10.0.0.1 ping statistics --- 00:31:19.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.229 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:19.229 13:11:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1440527 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1440527 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1440527 ']' 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:19.229 [2024-10-15 13:11:38.653862] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:19.229 [2024-10-15 13:11:38.654784] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:31:19.229 [2024-10-15 13:11:38.654819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:19.229 [2024-10-15 13:11:38.727276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:19.229 [2024-10-15 13:11:38.767202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:19.229 [2024-10-15 13:11:38.767240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:19.229 [2024-10-15 13:11:38.767248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:19.229 [2024-10-15 13:11:38.767254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:19.229 [2024-10-15 13:11:38.767258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:19.229 [2024-10-15 13:11:38.768738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.229 [2024-10-15 13:11:38.768873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:19.229 [2024-10-15 13:11:38.768956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.229 [2024-10-15 13:11:38.768957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:19.229 [2024-10-15 13:11:38.836238] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:19.229 [2024-10-15 13:11:38.837194] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:19.229 [2024-10-15 13:11:38.837370] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:19.229 [2024-10-15 13:11:38.838003] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:19.229 [2024-10-15 13:11:38.838030] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.229 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:19.229 [2024-10-15 13:11:39.081707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.229 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:19.229 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:19.229 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:19.229 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:19.229 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:19.489 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:19.489 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:19.750 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:19.750 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:20.008 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:20.267 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:20.267 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:20.267 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:20.267 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:20.526 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:20.526 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:20.786 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:21.045 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:21.045 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:21.045 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:21.045 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:21.304 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.563 [2024-10-15 13:11:41.661625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.563 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:21.563 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:21.822 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:22.080 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:22.080 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:31:22.080 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:22.080 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:31:22.080 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:31:22.080 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:31:23.985 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:23.985 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:23.986 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:31:23.986 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:31:23.986 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:23.986 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1208 -- # return 0 00:31:23.986 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:24.262 [global] 00:31:24.262 thread=1 00:31:24.262 invalidate=1 00:31:24.262 rw=write 00:31:24.262 time_based=1 00:31:24.262 runtime=1 00:31:24.262 ioengine=libaio 00:31:24.262 direct=1 00:31:24.262 bs=4096 00:31:24.262 iodepth=1 00:31:24.262 norandommap=0 00:31:24.262 numjobs=1 00:31:24.262 00:31:24.262 verify_dump=1 00:31:24.262 verify_backlog=512 00:31:24.262 verify_state_save=0 00:31:24.262 do_verify=1 00:31:24.262 verify=crc32c-intel 00:31:24.262 [job0] 00:31:24.262 filename=/dev/nvme0n1 00:31:24.262 [job1] 00:31:24.262 filename=/dev/nvme0n2 00:31:24.262 [job2] 00:31:24.262 filename=/dev/nvme0n3 00:31:24.262 [job3] 00:31:24.262 filename=/dev/nvme0n4 00:31:24.262 Could not set queue depth (nvme0n1) 00:31:24.262 Could not set queue depth (nvme0n2) 00:31:24.262 Could not set queue depth (nvme0n3) 00:31:24.262 Could not set queue depth (nvme0n4) 00:31:24.523 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.523 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.523 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.523 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.523 fio-3.35 00:31:24.523 Starting 4 threads 00:31:25.901 00:31:25.901 job0: (groupid=0, jobs=1): err= 0: pid=1441650: Tue Oct 15 13:11:45 2024 00:31:25.901 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(9.88MiB/1001msec) 00:31:25.901 slat (nsec): min=2673, max=26983, avg=7324.47, stdev=1240.06 00:31:25.901 clat (usec): min=178, max=619, avg=220.77, stdev=29.70 00:31:25.901 lat (usec): min=181, max=627, 
avg=228.10, stdev=29.84 00:31:25.901 clat percentiles (usec): 00:31:25.902 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:31:25.902 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 225], 00:31:25.902 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 255], 00:31:25.902 | 99.00th=[ 285], 99.50th=[ 400], 99.90th=[ 490], 99.95th=[ 553], 00:31:25.902 | 99.99th=[ 619] 00:31:25.902 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:25.902 slat (nsec): min=9503, max=38700, avg=10654.26, stdev=1480.11 00:31:25.902 clat (usec): min=114, max=322, avg=149.85, stdev=32.69 00:31:25.902 lat (usec): min=129, max=336, avg=160.50, stdev=33.01 00:31:25.902 clat percentiles (usec): 00:31:25.902 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 130], 00:31:25.902 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:31:25.902 | 70.00th=[ 143], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 237], 00:31:25.902 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 285], 99.95th=[ 289], 00:31:25.902 | 99.99th=[ 322] 00:31:25.902 bw ( KiB/s): min=12288, max=12288, per=77.70%, avg=12288.00, stdev= 0.00, samples=1 00:31:25.902 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:25.902 lat (usec) : 250=91.47%, 500=8.49%, 750=0.04% 00:31:25.902 cpu : usr=2.30%, sys=5.00%, ctx=5091, majf=0, minf=1 00:31:25.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.902 issued rwts: total=2530,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:25.902 job1: (groupid=0, jobs=1): err= 0: pid=1441651: Tue Oct 15 13:11:45 2024 00:31:25.902 read: IOPS=22, BW=88.8KiB/s (90.9kB/s)(92.0KiB/1036msec) 00:31:25.902 slat (nsec): min=10596, max=23767, 
avg=21998.17, stdev=2609.02 00:31:25.902 clat (usec): min=40860, max=41964, avg=41018.37, stdev=213.08 00:31:25.902 lat (usec): min=40884, max=41985, avg=41040.37, stdev=212.59 00:31:25.902 clat percentiles (usec): 00:31:25.902 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:25.902 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:25.902 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:25.902 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:25.902 | 99.99th=[42206] 00:31:25.902 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:31:25.902 slat (nsec): min=9680, max=36797, avg=11341.57, stdev=2538.29 00:31:25.902 clat (usec): min=139, max=308, avg=163.34, stdev=11.32 00:31:25.902 lat (usec): min=151, max=345, avg=174.68, stdev=12.07 00:31:25.902 clat percentiles (usec): 00:31:25.902 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:31:25.902 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 161], 60.00th=[ 165], 00:31:25.902 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 176], 95.00th=[ 180], 00:31:25.902 | 99.00th=[ 192], 99.50th=[ 202], 99.90th=[ 310], 99.95th=[ 310], 00:31:25.902 | 99.99th=[ 310] 00:31:25.902 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 00:31:25.902 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:25.902 lat (usec) : 250=95.51%, 500=0.19% 00:31:25.902 lat (msec) : 50=4.30% 00:31:25.902 cpu : usr=0.39%, sys=0.39%, ctx=536, majf=0, minf=1 00:31:25.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.902 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:25.902 job2: 
(groupid=0, jobs=1): err= 0: pid=1441652: Tue Oct 15 13:11:45 2024 00:31:25.902 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:31:25.902 slat (nsec): min=9849, max=25625, avg=21751.55, stdev=4035.88 00:31:25.902 clat (usec): min=40632, max=41978, avg=40998.36, stdev=239.29 00:31:25.902 lat (usec): min=40642, max=42001, avg=41020.11, stdev=240.28 00:31:25.902 clat percentiles (usec): 00:31:25.902 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:25.902 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:25.902 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:25.902 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:25.902 | 99.99th=[42206] 00:31:25.902 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:31:25.902 slat (nsec): min=8434, max=34872, avg=12614.52, stdev=2708.62 00:31:25.902 clat (usec): min=123, max=423, avg=221.60, stdev=41.10 00:31:25.902 lat (usec): min=133, max=457, avg=234.22, stdev=42.27 00:31:25.902 clat percentiles (usec): 00:31:25.902 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 186], 00:31:25.902 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 241], 00:31:25.902 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 260], 00:31:25.902 | 99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 424], 99.95th=[ 424], 00:31:25.902 | 99.99th=[ 424] 00:31:25.902 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 00:31:25.902 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:25.902 lat (usec) : 250=86.89%, 500=8.99% 00:31:25.902 lat (msec) : 50=4.12% 00:31:25.902 cpu : usr=0.29%, sys=0.59%, ctx=536, majf=0, minf=1 00:31:25.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.902 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:25.902 job3: (groupid=0, jobs=1): err= 0: pid=1441653: Tue Oct 15 13:11:45 2024 00:31:25.902 read: IOPS=288, BW=1155KiB/s (1183kB/s)(1156KiB/1001msec) 00:31:25.902 slat (nsec): min=6884, max=23469, avg=9153.86, stdev=3776.71 00:31:25.902 clat (usec): min=223, max=41150, avg=3103.82, stdev=10343.80 00:31:25.902 lat (usec): min=231, max=41161, avg=3112.98, stdev=10347.17 00:31:25.902 clat percentiles (usec): 00:31:25.902 | 1.00th=[ 227], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 285], 00:31:25.902 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 289], 60.00th=[ 289], 00:31:25.902 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[41157], 00:31:25.902 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:25.902 | 99.99th=[41157] 00:31:25.902 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:31:25.902 slat (nsec): min=9606, max=42554, avg=10701.72, stdev=1810.55 00:31:25.902 clat (usec): min=125, max=270, avg=181.98, stdev=28.29 00:31:25.902 lat (usec): min=136, max=293, avg=192.68, stdev=28.46 00:31:25.902 clat percentiles (usec): 00:31:25.902 | 1.00th=[ 133], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:31:25.902 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 182], 00:31:25.902 | 70.00th=[ 192], 80.00th=[ 210], 90.00th=[ 227], 95.00th=[ 239], 00:31:25.902 | 99.00th=[ 258], 99.50th=[ 258], 99.90th=[ 269], 99.95th=[ 269], 00:31:25.902 | 99.99th=[ 269] 00:31:25.902 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 00:31:25.902 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:25.902 lat (usec) : 250=63.42%, 500=34.08% 00:31:25.902 lat (msec) : 50=2.50% 00:31:25.902 cpu : usr=0.50%, sys=0.70%, ctx=801, majf=0, minf=2 00:31:25.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.902 issued rwts: total=289,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:25.902 00:31:25.902 Run status group 0 (all jobs): 00:31:25.902 READ: bw=10.8MiB/s (11.3MB/s), 85.9KiB/s-9.87MiB/s (87.9kB/s-10.4MB/s), io=11.2MiB (11.7MB), run=1001-1036msec 00:31:25.902 WRITE: bw=15.4MiB/s (16.2MB/s), 1977KiB/s-9.99MiB/s (2024kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1036msec 00:31:25.902 00:31:25.902 Disk stats (read/write): 00:31:25.902 nvme0n1: ios=2100/2187, merge=0/0, ticks=1246/328, in_queue=1574, util=95.99% 00:31:25.902 nvme0n2: ios=42/512, merge=0/0, ticks=1723/82, in_queue=1805, util=96.54% 00:31:25.902 nvme0n3: ios=40/512, merge=0/0, ticks=1641/112, in_queue=1753, util=96.39% 00:31:25.902 nvme0n4: ios=17/512, merge=0/0, ticks=697/89, in_queue=786, util=89.61% 00:31:25.902 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:25.902 [global] 00:31:25.902 thread=1 00:31:25.902 invalidate=1 00:31:25.902 rw=randwrite 00:31:25.902 time_based=1 00:31:25.902 runtime=1 00:31:25.902 ioengine=libaio 00:31:25.902 direct=1 00:31:25.902 bs=4096 00:31:25.902 iodepth=1 00:31:25.902 norandommap=0 00:31:25.902 numjobs=1 00:31:25.902 00:31:25.902 verify_dump=1 00:31:25.902 verify_backlog=512 00:31:25.902 verify_state_save=0 00:31:25.902 do_verify=1 00:31:25.902 verify=crc32c-intel 00:31:25.902 [job0] 00:31:25.902 filename=/dev/nvme0n1 00:31:25.902 [job1] 00:31:25.902 filename=/dev/nvme0n2 00:31:25.902 [job2] 00:31:25.902 filename=/dev/nvme0n3 00:31:25.902 [job3] 00:31:25.902 filename=/dev/nvme0n4 00:31:25.902 Could not set queue depth (nvme0n1) 
00:31:25.902 Could not set queue depth (nvme0n2) 00:31:25.902 Could not set queue depth (nvme0n3) 00:31:25.902 Could not set queue depth (nvme0n4) 00:31:26.160 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.160 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.160 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.161 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.161 fio-3.35 00:31:26.161 Starting 4 threads 00:31:27.539 00:31:27.539 job0: (groupid=0, jobs=1): err= 0: pid=1442017: Tue Oct 15 13:11:47 2024 00:31:27.539 read: IOPS=2525, BW=9.86MiB/s (10.3MB/s)(9.88MiB/1001msec) 00:31:27.539 slat (nsec): min=7108, max=42104, avg=8336.80, stdev=1976.81 00:31:27.539 clat (usec): min=172, max=752, avg=219.03, stdev=32.05 00:31:27.539 lat (usec): min=179, max=761, avg=227.37, stdev=32.37 00:31:27.539 clat percentiles (usec): 00:31:27.539 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:31:27.539 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:31:27.539 | 70.00th=[ 219], 80.00th=[ 233], 90.00th=[ 255], 95.00th=[ 281], 00:31:27.539 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 510], 99.95th=[ 627], 00:31:27.539 | 99.99th=[ 750] 00:31:27.539 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:27.539 slat (nsec): min=10382, max=45606, avg=11697.22, stdev=2044.61 00:31:27.539 clat (usec): min=107, max=472, avg=148.36, stdev=23.26 00:31:27.539 lat (usec): min=135, max=483, avg=160.06, stdev=23.52 00:31:27.539 clat percentiles (usec): 00:31:27.539 | 1.00th=[ 127], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 133], 00:31:27.539 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:31:27.539 | 70.00th=[ 153], 80.00th=[ 165], 90.00th=[ 
180], 95.00th=[ 194], 00:31:27.539 | 99.00th=[ 229], 99.50th=[ 233], 99.90th=[ 289], 99.95th=[ 334], 00:31:27.539 | 99.99th=[ 474] 00:31:27.539 bw ( KiB/s): min=12288, max=12288, per=68.60%, avg=12288.00, stdev= 0.00, samples=1 00:31:27.539 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:27.539 lat (usec) : 250=94.34%, 500=5.60%, 750=0.04%, 1000=0.02% 00:31:27.539 cpu : usr=3.60%, sys=8.70%, ctx=5093, majf=0, minf=1 00:31:27.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.539 issued rwts: total=2528,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.539 job1: (groupid=0, jobs=1): err= 0: pid=1442018: Tue Oct 15 13:11:47 2024 00:31:27.539 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:31:27.539 slat (nsec): min=9761, max=26743, avg=22454.59, stdev=3198.57 00:31:27.539 clat (usec): min=40661, max=41064, avg=40957.88, stdev=82.17 00:31:27.539 lat (usec): min=40671, max=41087, avg=40980.34, stdev=84.35 00:31:27.539 clat percentiles (usec): 00:31:27.539 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:27.539 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:27.539 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:27.539 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:27.539 | 99.99th=[41157] 00:31:27.539 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:31:27.539 slat (nsec): min=9749, max=40158, avg=11017.25, stdev=1992.30 00:31:27.539 clat (usec): min=155, max=431, avg=234.66, stdev=21.38 00:31:27.539 lat (usec): min=165, max=441, avg=245.68, stdev=21.85 00:31:27.539 clat percentiles (usec): 00:31:27.539 | 1.00th=[ 167], 5.00th=[ 
212], 10.00th=[ 217], 20.00th=[ 221], 00:31:27.539 | 30.00th=[ 227], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 239], 00:31:27.539 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 260], 00:31:27.539 | 99.00th=[ 297], 99.50th=[ 359], 99.90th=[ 433], 99.95th=[ 433], 00:31:27.539 | 99.99th=[ 433] 00:31:27.539 bw ( KiB/s): min= 4096, max= 4096, per=22.87%, avg=4096.00, stdev= 0.00, samples=1 00:31:27.539 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:27.539 lat (usec) : 250=83.71%, 500=12.17% 00:31:27.539 lat (msec) : 50=4.12% 00:31:27.539 cpu : usr=0.19%, sys=0.58%, ctx=535, majf=0, minf=1 00:31:27.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.539 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.539 job2: (groupid=0, jobs=1): err= 0: pid=1442020: Tue Oct 15 13:11:47 2024 00:31:27.539 read: IOPS=281, BW=1127KiB/s (1154kB/s)(1148KiB/1019msec) 00:31:27.539 slat (nsec): min=7007, max=25633, avg=9291.13, stdev=4033.00 00:31:27.539 clat (usec): min=188, max=41496, avg=3097.99, stdev=10395.83 00:31:27.539 lat (usec): min=196, max=41505, avg=3107.28, stdev=10397.92 00:31:27.539 clat percentiles (usec): 00:31:27.539 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:31:27.539 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 265], 60.00th=[ 293], 00:31:27.539 | 70.00th=[ 297], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[40633], 00:31:27.539 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:27.539 | 99.99th=[41681] 00:31:27.539 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:31:27.539 slat (nsec): min=9935, max=39528, avg=11503.18, stdev=1854.05 00:31:27.539 clat (usec): min=163, 
max=430, avg=231.66, stdev=20.94 00:31:27.539 lat (usec): min=173, max=442, avg=243.16, stdev=21.42 00:31:27.539 clat percentiles (usec): 00:31:27.539 | 1.00th=[ 180], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219], 00:31:27.539 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 237], 00:31:27.539 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 255], 00:31:27.539 | 99.00th=[ 273], 99.50th=[ 347], 99.90th=[ 433], 99.95th=[ 433], 00:31:27.539 | 99.99th=[ 433] 00:31:27.539 bw ( KiB/s): min= 4096, max= 4096, per=22.87%, avg=4096.00, stdev= 0.00, samples=1 00:31:27.539 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:27.539 lat (usec) : 250=74.47%, 500=23.03% 00:31:27.539 lat (msec) : 50=2.50% 00:31:27.539 cpu : usr=0.49%, sys=0.79%, ctx=800, majf=0, minf=1 00:31:27.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.540 issued rwts: total=287,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.540 job3: (groupid=0, jobs=1): err= 0: pid=1442021: Tue Oct 15 13:11:47 2024 00:31:27.540 read: IOPS=526, BW=2105KiB/s (2156kB/s)(2116KiB/1005msec) 00:31:27.540 slat (nsec): min=7464, max=35895, avg=9564.19, stdev=3404.83 00:31:27.540 clat (usec): min=188, max=41372, avg=1489.19, stdev=6956.48 00:31:27.540 lat (usec): min=196, max=41381, avg=1498.75, stdev=6956.89 00:31:27.540 clat percentiles (usec): 00:31:27.540 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 223], 00:31:27.540 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 265], 00:31:27.540 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 343], 00:31:27.540 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:27.540 | 99.99th=[41157] 00:31:27.540 write: 
IOPS=1018, BW=4076KiB/s (4173kB/s)(4096KiB/1005msec); 0 zone resets 00:31:27.540 slat (nsec): min=9762, max=37837, avg=11621.70, stdev=1778.11 00:31:27.540 clat (usec): min=128, max=401, avg=190.88, stdev=37.91 00:31:27.540 lat (usec): min=139, max=413, avg=202.51, stdev=37.87 00:31:27.540 clat percentiles (usec): 00:31:27.540 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:31:27.540 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 186], 60.00th=[ 202], 00:31:27.540 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 245], 00:31:27.540 | 99.00th=[ 260], 99.50th=[ 289], 99.90th=[ 343], 99.95th=[ 400], 00:31:27.540 | 99.99th=[ 400] 00:31:27.540 bw ( KiB/s): min= 4096, max= 4096, per=22.87%, avg=4096.00, stdev= 0.00, samples=2 00:31:27.540 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:31:27.540 lat (usec) : 250=80.10%, 500=18.48%, 750=0.26%, 1000=0.06% 00:31:27.540 lat (msec) : 4=0.06%, 50=1.03% 00:31:27.540 cpu : usr=0.60%, sys=1.89%, ctx=1554, majf=0, minf=1 00:31:27.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.540 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.540 00:31:27.540 Run status group 0 (all jobs): 00:31:27.540 READ: bw=12.8MiB/s (13.4MB/s), 85.5KiB/s-9.86MiB/s (87.6kB/s-10.3MB/s), io=13.1MiB (13.8MB), run=1001-1029msec 00:31:27.540 WRITE: bw=17.5MiB/s (18.3MB/s), 1990KiB/s-9.99MiB/s (2038kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1029msec 00:31:27.540 00:31:27.540 Disk stats (read/write): 00:31:27.540 nvme0n1: ios=2072/2177, merge=0/0, ticks=1295/305, in_queue=1600, util=84.37% 00:31:27.540 nvme0n2: ios=66/512, merge=0/0, ticks=1497/111, in_queue=1608, util=88.39% 00:31:27.540 nvme0n3: ios=305/512, 
merge=0/0, ticks=1591/114, in_queue=1705, util=92.50% 00:31:27.540 nvme0n4: ios=581/1024, merge=0/0, ticks=1231/189, in_queue=1420, util=94.15% 00:31:27.540 13:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:27.540 [global] 00:31:27.540 thread=1 00:31:27.540 invalidate=1 00:31:27.540 rw=write 00:31:27.540 time_based=1 00:31:27.540 runtime=1 00:31:27.540 ioengine=libaio 00:31:27.540 direct=1 00:31:27.540 bs=4096 00:31:27.540 iodepth=128 00:31:27.540 norandommap=0 00:31:27.540 numjobs=1 00:31:27.540 00:31:27.540 verify_dump=1 00:31:27.540 verify_backlog=512 00:31:27.540 verify_state_save=0 00:31:27.540 do_verify=1 00:31:27.540 verify=crc32c-intel 00:31:27.540 [job0] 00:31:27.540 filename=/dev/nvme0n1 00:31:27.540 [job1] 00:31:27.540 filename=/dev/nvme0n2 00:31:27.540 [job2] 00:31:27.540 filename=/dev/nvme0n3 00:31:27.540 [job3] 00:31:27.540 filename=/dev/nvme0n4 00:31:27.540 Could not set queue depth (nvme0n1) 00:31:27.540 Could not set queue depth (nvme0n2) 00:31:27.540 Could not set queue depth (nvme0n3) 00:31:27.540 Could not set queue depth (nvme0n4) 00:31:27.540 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:27.540 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:27.540 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:27.540 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:27.540 fio-3.35 00:31:27.540 Starting 4 threads 00:31:28.919 00:31:28.919 job0: (groupid=0, jobs=1): err= 0: pid=1442394: Tue Oct 15 13:11:49 2024 00:31:28.919 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:31:28.919 slat (nsec): min=1304, max=19314k, avg=145107.63, 
stdev=1050597.49 00:31:28.919 clat (usec): min=4504, max=68398, avg=15527.08, stdev=9065.65 00:31:28.919 lat (usec): min=4515, max=68409, avg=15672.19, stdev=9190.18 00:31:28.919 clat percentiles (usec): 00:31:28.919 | 1.00th=[ 4490], 5.00th=[ 6849], 10.00th=[ 6915], 20.00th=[ 8848], 00:31:28.919 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[13960], 00:31:28.919 | 70.00th=[15795], 80.00th=[21627], 90.00th=[28443], 95.00th=[30278], 00:31:28.919 | 99.00th=[52167], 99.50th=[63177], 99.90th=[68682], 99.95th=[68682], 00:31:28.919 | 99.99th=[68682] 00:31:28.919 write: IOPS=2530, BW=9.88MiB/s (10.4MB/s)(9.93MiB/1005msec); 0 zone resets 00:31:28.919 slat (usec): min=2, max=36531, avg=266.46, stdev=1430.11 00:31:28.919 clat (usec): min=1012, max=112948, avg=37627.88, stdev=24913.38 00:31:28.919 lat (usec): min=1020, max=112957, avg=37894.34, stdev=25070.90 00:31:28.919 clat percentiles (msec): 00:31:28.919 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 13], 00:31:28.919 | 30.00th=[ 20], 40.00th=[ 25], 50.00th=[ 37], 60.00th=[ 43], 00:31:28.919 | 70.00th=[ 55], 80.00th=[ 58], 90.00th=[ 69], 95.00th=[ 89], 00:31:28.919 | 99.00th=[ 103], 99.50th=[ 108], 99.90th=[ 113], 99.95th=[ 113], 00:31:28.919 | 99.99th=[ 113] 00:31:28.919 bw ( KiB/s): min= 8888, max=10432, per=15.70%, avg=9660.00, stdev=1091.77, samples=2 00:31:28.919 iops : min= 2222, max= 2608, avg=2415.00, stdev=272.94, samples=2 00:31:28.919 lat (msec) : 2=0.04%, 4=0.02%, 10=21.39%, 20=30.95%, 50=27.40% 00:31:28.919 lat (msec) : 100=19.30%, 250=0.89% 00:31:28.919 cpu : usr=1.79%, sys=2.49%, ctx=261, majf=0, minf=2 00:31:28.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:31:28.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:28.919 issued rwts: total=2048,2543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.919 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:31:28.919 job1: (groupid=0, jobs=1): err= 0: pid=1442395: Tue Oct 15 13:11:49 2024 00:31:28.919 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:31:28.919 slat (nsec): min=1314, max=17428k, avg=142064.37, stdev=998047.08 00:31:28.919 clat (usec): min=5245, max=49962, avg=17940.86, stdev=8208.55 00:31:28.919 lat (usec): min=5252, max=51917, avg=18082.92, stdev=8281.11 00:31:28.919 clat percentiles (usec): 00:31:28.919 | 1.00th=[ 7701], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10028], 00:31:28.919 | 30.00th=[12256], 40.00th=[13304], 50.00th=[16319], 60.00th=[19268], 00:31:28.919 | 70.00th=[20055], 80.00th=[28967], 90.00th=[29492], 95.00th=[31851], 00:31:28.919 | 99.00th=[42206], 99.50th=[44827], 99.90th=[50070], 99.95th=[50070], 00:31:28.919 | 99.99th=[50070] 00:31:28.919 write: IOPS=3004, BW=11.7MiB/s (12.3MB/s)(11.9MiB/1011msec); 0 zone resets 00:31:28.919 slat (usec): min=2, max=16728, avg=198.92, stdev=981.08 00:31:28.919 clat (usec): min=387, max=99255, avg=27039.25, stdev=20764.39 00:31:28.919 lat (usec): min=417, max=99264, avg=27238.17, stdev=20906.37 00:31:28.919 clat percentiles (usec): 00:31:28.919 | 1.00th=[ 963], 5.00th=[ 4621], 10.00th=[ 7439], 20.00th=[12911], 00:31:28.919 | 30.00th=[13566], 40.00th=[17433], 50.00th=[19792], 60.00th=[23987], 00:31:28.919 | 70.00th=[31327], 80.00th=[42206], 90.00th=[55313], 95.00th=[69731], 00:31:28.919 | 99.00th=[96994], 99.50th=[96994], 99.90th=[99091], 99.95th=[99091], 00:31:28.919 | 99.99th=[99091] 00:31:28.919 bw ( KiB/s): min=11264, max=12016, per=18.92%, avg=11640.00, stdev=531.74, samples=2 00:31:28.919 iops : min= 2816, max= 3004, avg=2910.00, stdev=132.94, samples=2 00:31:28.919 lat (usec) : 500=0.14%, 1000=0.54% 00:31:28.919 lat (msec) : 2=0.54%, 4=1.02%, 10=14.38%, 20=40.71%, 50=34.73% 00:31:28.919 lat (msec) : 100=7.95% 00:31:28.919 cpu : usr=2.38%, sys=3.66%, ctx=312, majf=0, minf=1 00:31:28.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, 
>=64=98.9% 00:31:28.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:28.919 issued rwts: total=2560,3038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:28.919 job2: (groupid=0, jobs=1): err= 0: pid=1442396: Tue Oct 15 13:11:49 2024 00:31:28.919 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:31:28.920 slat (nsec): min=1412, max=15410k, avg=109912.00, stdev=843376.50 00:31:28.920 clat (usec): min=4486, max=59867, avg=13838.39, stdev=7151.32 00:31:28.920 lat (usec): min=4493, max=59876, avg=13948.31, stdev=7250.15 00:31:28.920 clat percentiles (usec): 00:31:28.920 | 1.00th=[ 5997], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 8586], 00:31:28.920 | 30.00th=[ 9503], 40.00th=[10945], 50.00th=[12780], 60.00th=[13435], 00:31:28.920 | 70.00th=[14353], 80.00th=[16581], 90.00th=[26346], 95.00th=[27395], 00:31:28.920 | 99.00th=[39584], 99.50th=[50594], 99.90th=[60031], 99.95th=[60031], 00:31:28.920 | 99.99th=[60031] 00:31:28.920 write: IOPS=4475, BW=17.5MiB/s (18.3MB/s)(17.6MiB/1008msec); 0 zone resets 00:31:28.920 slat (usec): min=2, max=13309, avg=114.89, stdev=634.48 00:31:28.920 clat (usec): min=4155, max=60505, avg=15754.45, stdev=9797.26 00:31:28.920 lat (usec): min=4166, max=61333, avg=15869.34, stdev=9851.87 00:31:28.920 clat percentiles (usec): 00:31:28.920 | 1.00th=[ 5276], 5.00th=[ 6128], 10.00th=[ 7046], 20.00th=[ 9110], 00:31:28.920 | 30.00th=[ 9896], 40.00th=[12387], 50.00th=[13960], 60.00th=[14877], 00:31:28.920 | 70.00th=[16450], 80.00th=[19530], 90.00th=[26084], 95.00th=[39060], 00:31:28.920 | 99.00th=[52691], 99.50th=[57410], 99.90th=[60556], 99.95th=[60556], 00:31:28.920 | 99.99th=[60556] 00:31:28.920 bw ( KiB/s): min=10488, max=24576, per=28.50%, avg=17532.00, stdev=9961.72, samples=2 00:31:28.920 iops : min= 2622, max= 6144, avg=4383.00, stdev=2490.43, 
samples=2 00:31:28.920 lat (msec) : 10=35.69%, 20=47.87%, 50=15.27%, 100=1.17% 00:31:28.920 cpu : usr=3.77%, sys=6.65%, ctx=361, majf=0, minf=1 00:31:28.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:28.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:28.920 issued rwts: total=4096,4511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:28.920 job3: (groupid=0, jobs=1): err= 0: pid=1442397: Tue Oct 15 13:11:49 2024 00:31:28.920 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:31:28.920 slat (nsec): min=1491, max=18983k, avg=84347.39, stdev=665082.32 00:31:28.920 clat (usec): min=3355, max=39284, avg=10804.41, stdev=3943.00 00:31:28.920 lat (usec): min=3895, max=39312, avg=10888.76, stdev=4003.59 00:31:28.920 clat percentiles (usec): 00:31:28.920 | 1.00th=[ 4490], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 8717], 00:31:28.920 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9896], 00:31:28.920 | 70.00th=[10552], 80.00th=[11600], 90.00th=[18744], 95.00th=[20317], 00:31:28.920 | 99.00th=[23987], 99.50th=[23987], 99.90th=[26870], 99.95th=[36963], 00:31:28.920 | 99.99th=[39060] 00:31:28.920 write: IOPS=5427, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1005msec); 0 zone resets 00:31:28.920 slat (usec): min=2, max=14860, avg=88.78, stdev=612.30 00:31:28.920 clat (usec): min=415, max=61893, avg=13222.55, stdev=10148.40 00:31:28.920 lat (usec): min=425, max=61899, avg=13311.33, stdev=10208.43 00:31:28.920 clat percentiles (usec): 00:31:28.920 | 1.00th=[ 717], 5.00th=[ 4113], 10.00th=[ 6456], 20.00th=[ 8225], 00:31:28.920 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:31:28.920 | 70.00th=[10421], 80.00th=[18744], 90.00th=[25822], 95.00th=[38011], 00:31:28.920 | 99.00th=[50070], 99.50th=[54789], 99.90th=[62129], 
99.95th=[62129], 00:31:28.920 | 99.99th=[62129] 00:31:28.920 bw ( KiB/s): min=18536, max=24088, per=34.65%, avg=21312.00, stdev=3925.86, samples=2 00:31:28.920 iops : min= 4634, max= 6022, avg=5328.00, stdev=981.46, samples=2 00:31:28.920 lat (usec) : 500=0.03%, 750=0.57%, 1000=0.11% 00:31:28.920 lat (msec) : 2=0.02%, 4=1.85%, 10=62.95%, 20=23.04%, 50=10.80% 00:31:28.920 lat (msec) : 100=0.62% 00:31:28.920 cpu : usr=4.38%, sys=6.47%, ctx=445, majf=0, minf=1 00:31:28.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:28.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:28.920 issued rwts: total=5120,5455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:28.920 00:31:28.920 Run status group 0 (all jobs): 00:31:28.920 READ: bw=53.4MiB/s (56.0MB/s), 8151KiB/s-19.9MiB/s (8347kB/s-20.9MB/s), io=54.0MiB (56.6MB), run=1005-1011msec 00:31:28.920 WRITE: bw=60.1MiB/s (63.0MB/s), 9.88MiB/s-21.2MiB/s (10.4MB/s-22.2MB/s), io=60.7MiB (63.7MB), run=1005-1011msec 00:31:28.920 00:31:28.920 Disk stats (read/write): 00:31:28.920 nvme0n1: ios=1586/1967, merge=0/0, ticks=14795/49897, in_queue=64692, util=86.67% 00:31:28.920 nvme0n2: ios=2085/2295, merge=0/0, ticks=25237/31840, in_queue=57077, util=99.29% 00:31:28.920 nvme0n3: ios=3767/4096, merge=0/0, ticks=35272/43875, in_queue=79147, util=98.34% 00:31:28.920 nvme0n4: ios=4503/4608, merge=0/0, ticks=36690/46125, in_queue=82815, util=97.80% 00:31:28.920 13:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:28.920 [global] 00:31:28.920 thread=1 00:31:28.920 invalidate=1 00:31:28.920 rw=randwrite 00:31:28.920 time_based=1 00:31:28.920 runtime=1 00:31:28.920 ioengine=libaio 
00:31:28.920 direct=1 00:31:28.920 bs=4096 00:31:28.920 iodepth=128 00:31:28.920 norandommap=0 00:31:28.920 numjobs=1 00:31:28.920 00:31:28.920 verify_dump=1 00:31:28.920 verify_backlog=512 00:31:28.920 verify_state_save=0 00:31:28.920 do_verify=1 00:31:28.920 verify=crc32c-intel 00:31:28.920 [job0] 00:31:28.920 filename=/dev/nvme0n1 00:31:28.920 [job1] 00:31:28.920 filename=/dev/nvme0n2 00:31:28.920 [job2] 00:31:28.920 filename=/dev/nvme0n3 00:31:28.920 [job3] 00:31:28.920 filename=/dev/nvme0n4 00:31:28.920 Could not set queue depth (nvme0n1) 00:31:28.920 Could not set queue depth (nvme0n2) 00:31:28.920 Could not set queue depth (nvme0n3) 00:31:28.920 Could not set queue depth (nvme0n4) 00:31:29.179 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.179 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.179 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.179 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.179 fio-3.35 00:31:29.179 Starting 4 threads 00:31:30.557 00:31:30.557 job0: (groupid=0, jobs=1): err= 0: pid=1442763: Tue Oct 15 13:11:50 2024 00:31:30.557 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:31:30.557 slat (nsec): min=1355, max=14738k, avg=109003.82, stdev=768753.07 00:31:30.557 clat (usec): min=3660, max=54150, avg=14251.54, stdev=6051.39 00:31:30.557 lat (usec): min=3671, max=54162, avg=14360.54, stdev=6106.40 00:31:30.557 clat percentiles (usec): 00:31:30.557 | 1.00th=[ 5932], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[10159], 00:31:30.557 | 30.00th=[11469], 40.00th=[13042], 50.00th=[13960], 60.00th=[14353], 00:31:30.557 | 70.00th=[14877], 80.00th=[16188], 90.00th=[19006], 95.00th=[21365], 00:31:30.557 | 99.00th=[40109], 99.50th=[44303], 99.90th=[44303], 
99.95th=[44303], 00:31:30.557 | 99.99th=[54264] 00:31:30.557 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.1MiB/1009msec); 0 zone resets 00:31:30.557 slat (usec): min=2, max=14192, avg=101.70, stdev=693.16 00:31:30.557 clat (usec): min=3399, max=54283, avg=13308.22, stdev=5870.12 00:31:30.557 lat (usec): min=3411, max=54315, avg=13409.92, stdev=5924.08 00:31:30.557 clat percentiles (usec): 00:31:30.557 | 1.00th=[ 4752], 5.00th=[ 7111], 10.00th=[ 8029], 20.00th=[10028], 00:31:30.558 | 30.00th=[11207], 40.00th=[12256], 50.00th=[12518], 60.00th=[13173], 00:31:30.558 | 70.00th=[14091], 80.00th=[14484], 90.00th=[16450], 95.00th=[22676], 00:31:30.558 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[45876], 00:31:30.558 | 99.99th=[54264] 00:31:30.558 bw ( KiB/s): min=16384, max=20480, per=24.25%, avg=18432.00, stdev=2896.31, samples=2 00:31:30.558 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:31:30.558 lat (msec) : 4=0.26%, 10=19.39%, 20=74.54%, 50=5.77%, 100=0.03% 00:31:30.558 cpu : usr=3.57%, sys=6.25%, ctx=378, majf=0, minf=1 00:31:30.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:30.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.558 issued rwts: total=4608,4622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.558 job1: (groupid=0, jobs=1): err= 0: pid=1442764: Tue Oct 15 13:11:50 2024 00:31:30.558 read: IOPS=5585, BW=21.8MiB/s (22.9MB/s)(21.9MiB/1006msec) 00:31:30.558 slat (nsec): min=1506, max=10884k, avg=84597.94, stdev=631353.04 00:31:30.558 clat (usec): min=5353, max=22157, avg=11357.75, stdev=3171.99 00:31:30.558 lat (usec): min=5356, max=22163, avg=11442.35, stdev=3200.94 00:31:30.558 clat percentiles (usec): 00:31:30.558 | 1.00th=[ 5669], 5.00th=[ 6849], 10.00th=[ 7701], 20.00th=[ 8455], 00:31:30.558 | 
30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[11863], 00:31:30.558 | 70.00th=[12911], 80.00th=[14484], 90.00th=[15664], 95.00th=[17171], 00:31:30.558 | 99.00th=[18744], 99.50th=[19792], 99.90th=[20841], 99.95th=[22152], 00:31:30.558 | 99.99th=[22152] 00:31:30.558 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:31:30.558 slat (usec): min=2, max=54210, avg=86.40, stdev=929.96 00:31:30.558 clat (usec): min=720, max=59529, avg=11284.68, stdev=7592.54 00:31:30.558 lat (usec): min=1559, max=59539, avg=11371.07, stdev=7614.67 00:31:30.558 clat percentiles (usec): 00:31:30.558 | 1.00th=[ 4948], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7701], 00:31:30.558 | 30.00th=[ 8717], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10945], 00:31:30.558 | 70.00th=[11994], 80.00th=[12518], 90.00th=[14484], 95.00th=[15795], 00:31:30.558 | 99.00th=[58983], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:31:30.558 | 99.99th=[59507] 00:31:30.558 bw ( KiB/s): min=19936, max=25120, per=29.64%, avg=22528.00, stdev=3665.64, samples=2 00:31:30.558 iops : min= 4984, max= 6280, avg=5632.00, stdev=916.41, samples=2 00:31:30.558 lat (usec) : 750=0.01% 00:31:30.558 lat (msec) : 2=0.02%, 4=0.10%, 10=45.49%, 20=53.07%, 50=0.19% 00:31:30.558 lat (msec) : 100=1.13% 00:31:30.558 cpu : usr=4.78%, sys=6.87%, ctx=334, majf=0, minf=1 00:31:30.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:30.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.558 issued rwts: total=5619,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.558 job2: (groupid=0, jobs=1): err= 0: pid=1442770: Tue Oct 15 13:11:50 2024 00:31:30.558 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:31:30.558 slat (nsec): min=1374, max=13524k, avg=139609.34, 
stdev=900120.21 00:31:30.558 clat (usec): min=4511, max=49265, avg=15869.82, stdev=6250.80 00:31:30.558 lat (usec): min=4519, max=49274, avg=16009.43, stdev=6316.30 00:31:30.558 clat percentiles (usec): 00:31:30.558 | 1.00th=[ 4686], 5.00th=[ 9110], 10.00th=[12649], 20.00th=[12911], 00:31:30.558 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13960], 60.00th=[14484], 00:31:30.558 | 70.00th=[15008], 80.00th=[18744], 90.00th=[22152], 95.00th=[26608], 00:31:30.558 | 99.00th=[44827], 99.50th=[45351], 99.90th=[49021], 99.95th=[49021], 00:31:30.558 | 99.99th=[49021] 00:31:30.558 write: IOPS=3479, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1011msec); 0 zone resets 00:31:30.558 slat (nsec): min=1969, max=11603k, avg=155230.65, stdev=667009.14 00:31:30.558 clat (usec): min=1471, max=103150, avg=22581.86, stdev=17875.30 00:31:30.558 lat (usec): min=1482, max=103162, avg=22737.09, stdev=17990.40 00:31:30.558 clat percentiles (usec): 00:31:30.558 | 1.00th=[ 1958], 5.00th=[ 6128], 10.00th=[ 9241], 20.00th=[ 12518], 00:31:30.558 | 30.00th=[ 13042], 40.00th=[ 13304], 50.00th=[ 13566], 60.00th=[ 15139], 00:31:30.558 | 70.00th=[ 28967], 80.00th=[ 37487], 90.00th=[ 39060], 95.00th=[ 46924], 00:31:30.558 | 99.00th=[ 96994], 99.50th=[ 99091], 99.90th=[103285], 99.95th=[103285], 00:31:30.558 | 99.99th=[103285] 00:31:30.558 bw ( KiB/s): min= 9520, max=17608, per=17.85%, avg=13564.00, stdev=5719.08, samples=2 00:31:30.558 iops : min= 2380, max= 4402, avg=3391.00, stdev=1429.77, samples=2 00:31:30.558 lat (msec) : 2=0.64%, 4=0.46%, 10=7.86%, 20=64.31%, 50=24.32% 00:31:30.558 lat (msec) : 100=2.19%, 250=0.23% 00:31:30.558 cpu : usr=3.07%, sys=3.47%, ctx=472, majf=0, minf=2 00:31:30.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:31:30.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.558 issued rwts: total=3072,3518,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:31:30.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.558 job3: (groupid=0, jobs=1): err= 0: pid=1442772: Tue Oct 15 13:11:50 2024 00:31:30.558 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:31:30.558 slat (nsec): min=1658, max=6598.1k, avg=92993.93, stdev=502988.99 00:31:30.558 clat (usec): min=4492, max=22466, avg=11992.72, stdev=2046.63 00:31:30.558 lat (usec): min=4503, max=22481, avg=12085.71, stdev=2076.76 00:31:30.558 clat percentiles (usec): 00:31:30.558 | 1.00th=[ 6259], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[10290], 00:31:30.558 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11994], 60.00th=[12780], 00:31:30.558 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14353], 95.00th=[15139], 00:31:30.558 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17433], 99.95th=[18220], 00:31:30.558 | 99.99th=[22414] 00:31:30.558 write: IOPS=5401, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1007msec); 0 zone resets 00:31:30.558 slat (usec): min=2, max=9793, avg=88.27, stdev=542.75 00:31:30.558 clat (usec): min=411, max=35102, avg=12085.51, stdev=2471.01 00:31:30.558 lat (usec): min=445, max=35111, avg=12173.78, stdev=2516.51 00:31:30.558 clat percentiles (usec): 00:31:30.558 | 1.00th=[ 4424], 5.00th=[ 8225], 10.00th=[10159], 20.00th=[10945], 00:31:30.558 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12387], 00:31:30.558 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14877], 00:31:30.558 | 99.00th=[19006], 99.50th=[24249], 99.90th=[29754], 99.95th=[29754], 00:31:30.558 | 99.99th=[34866] 00:31:30.558 bw ( KiB/s): min=19096, max=23400, per=27.95%, avg=21248.00, stdev=3043.39, samples=2 00:31:30.558 iops : min= 4774, max= 5850, avg=5312.00, stdev=760.85, samples=2 00:31:30.558 lat (usec) : 500=0.02% 00:31:30.558 lat (msec) : 2=0.09%, 4=0.38%, 10=11.88%, 20=87.17%, 50=0.47% 00:31:30.558 cpu : usr=3.78%, sys=6.56%, ctx=501, majf=0, minf=1 00:31:30.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 
00:31:30.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.558 issued rwts: total=5120,5439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.558 00:31:30.558 Run status group 0 (all jobs): 00:31:30.558 READ: bw=71.2MiB/s (74.6MB/s), 11.9MiB/s-21.8MiB/s (12.4MB/s-22.9MB/s), io=71.9MiB (75.4MB), run=1006-1011msec 00:31:30.558 WRITE: bw=74.2MiB/s (77.8MB/s), 13.6MiB/s-21.9MiB/s (14.3MB/s-22.9MB/s), io=75.0MiB (78.7MB), run=1006-1011msec 00:31:30.558 00:31:30.558 Disk stats (read/write): 00:31:30.558 nvme0n1: ios=3606/3967, merge=0/0, ticks=33998/34180, in_queue=68178, util=97.60% 00:31:30.558 nvme0n2: ios=4629/4916, merge=0/0, ticks=50393/47786, in_queue=98179, util=97.76% 00:31:30.558 nvme0n3: ios=2425/2560, merge=0/0, ticks=38600/66580, in_queue=105180, util=88.92% 00:31:30.558 nvme0n4: ios=4483/4608, merge=0/0, ticks=30844/29331, in_queue=60175, util=99.68% 00:31:30.558 13:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:30.558 13:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1443002 00:31:30.558 13:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:30.558 13:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:30.558 [global] 00:31:30.558 thread=1 00:31:30.558 invalidate=1 00:31:30.558 rw=read 00:31:30.558 time_based=1 00:31:30.558 runtime=10 00:31:30.558 ioengine=libaio 00:31:30.558 direct=1 00:31:30.558 bs=4096 00:31:30.558 iodepth=1 00:31:30.558 norandommap=1 00:31:30.558 numjobs=1 00:31:30.558 00:31:30.558 [job0] 00:31:30.558 filename=/dev/nvme0n1 00:31:30.558 [job1] 00:31:30.558 
filename=/dev/nvme0n2 00:31:30.558 [job2] 00:31:30.558 filename=/dev/nvme0n3 00:31:30.558 [job3] 00:31:30.558 filename=/dev/nvme0n4 00:31:30.558 Could not set queue depth (nvme0n1) 00:31:30.558 Could not set queue depth (nvme0n2) 00:31:30.558 Could not set queue depth (nvme0n3) 00:31:30.558 Could not set queue depth (nvme0n4) 00:31:30.817 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.817 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.817 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.817 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.817 fio-3.35 00:31:30.817 Starting 4 threads 00:31:33.354 13:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:33.613 13:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:33.613 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=933888, buflen=4096 00:31:33.613 fio: pid=1443142, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:33.872 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:31:33.872 fio: pid=1443141, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:33.872 13:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:33.872 13:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc0 00:31:34.130 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=311296, buflen=4096 00:31:34.130 fio: pid=1443139, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:34.130 13:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:34.130 13:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:34.389 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14438400, buflen=4096 00:31:34.389 fio: pid=1443140, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:34.389 13:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:34.389 13:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:34.389 00:31:34.389 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1443139: Tue Oct 15 13:11:54 2024 00:31:34.389 read: IOPS=24, BW=97.6KiB/s (100.0kB/s)(304KiB/3114msec) 00:31:34.389 slat (usec): min=9, max=5915, avg=100.02, stdev=671.54 00:31:34.389 clat (usec): min=550, max=43905, avg=40577.16, stdev=4679.72 00:31:34.389 lat (usec): min=590, max=47101, avg=40678.21, stdev=4736.96 00:31:34.389 clat percentiles (usec): 00:31:34.389 | 1.00th=[ 553], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:34.389 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:34.389 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:34.389 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 
99.95th=[43779], 00:31:34.389 | 99.99th=[43779] 00:31:34.389 bw ( KiB/s): min= 96, max= 104, per=2.09%, avg=98.00, stdev= 3.35, samples=6 00:31:34.389 iops : min= 24, max= 26, avg=24.50, stdev= 0.84, samples=6 00:31:34.389 lat (usec) : 750=1.30% 00:31:34.389 lat (msec) : 50=97.40% 00:31:34.389 cpu : usr=0.13%, sys=0.00%, ctx=79, majf=0, minf=2 00:31:34.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.389 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.389 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.389 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1443140: Tue Oct 15 13:11:54 2024 00:31:34.389 read: IOPS=1059, BW=4236KiB/s (4337kB/s)(13.8MiB/3329msec) 00:31:34.389 slat (usec): min=6, max=14588, avg=27.93, stdev=480.84 00:31:34.389 clat (usec): min=184, max=41692, avg=907.89, stdev=5187.44 00:31:34.389 lat (usec): min=190, max=41701, avg=935.82, stdev=5208.83 00:31:34.389 clat percentiles (usec): 00:31:34.389 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 223], 00:31:34.389 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:31:34.389 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 255], 00:31:34.389 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:31:34.389 | 99.99th=[41681] 00:31:34.389 bw ( KiB/s): min= 104, max= 9120, per=75.97%, avg=3562.00, stdev=4038.15, samples=6 00:31:34.389 iops : min= 26, max= 2280, avg=890.50, stdev=1009.54, samples=6 00:31:34.389 lat (usec) : 250=85.34%, 500=12.90%, 750=0.09% 00:31:34.389 lat (msec) : 50=1.64% 00:31:34.389 cpu : usr=0.48%, sys=1.80%, ctx=3533, majf=0, minf=2 00:31:34.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:31:34.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.389 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.389 issued rwts: total=3526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.389 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1443141: Tue Oct 15 13:11:54 2024 00:31:34.389 read: IOPS=25, BW=101KiB/s (103kB/s)(292KiB/2894msec) 00:31:34.389 slat (nsec): min=6696, max=29742, avg=22157.22, stdev=2794.00 00:31:34.389 clat (usec): min=248, max=41968, avg=39324.98, stdev=8128.21 00:31:34.389 lat (usec): min=258, max=41991, avg=39347.13, stdev=8129.15 00:31:34.389 clat percentiles (usec): 00:31:34.389 | 1.00th=[ 249], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:34.389 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:34.389 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:34.389 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:34.389 | 99.99th=[42206] 00:31:34.389 bw ( KiB/s): min= 96, max= 120, per=2.18%, avg=102.40, stdev=10.43, samples=5 00:31:34.389 iops : min= 24, max= 30, avg=25.60, stdev= 2.61, samples=5 00:31:34.389 lat (usec) : 250=1.35%, 500=1.35%, 750=1.35% 00:31:34.389 lat (msec) : 50=94.59% 00:31:34.389 cpu : usr=0.10%, sys=0.00%, ctx=74, majf=0, minf=2 00:31:34.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.389 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.389 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.389 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=1443142: Tue Oct 15 13:11:54 2024 00:31:34.389 read: IOPS=84, BW=338KiB/s (346kB/s)(912KiB/2698msec) 00:31:34.389 slat (nsec): min=6973, max=70907, avg=12306.57, stdev=7444.44 00:31:34.389 clat (usec): min=259, max=41108, avg=11725.02, stdev=18307.36 00:31:34.389 lat (usec): min=267, max=41129, avg=11737.28, stdev=18313.29 00:31:34.389 clat percentiles (usec): 00:31:34.389 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:31:34.389 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 416], 00:31:34.389 | 70.00th=[ 457], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:34.389 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:34.389 | 99.99th=[41157] 00:31:34.389 bw ( KiB/s): min= 96, max= 1392, per=7.59%, avg=356.80, stdev=578.70, samples=5 00:31:34.389 iops : min= 24, max= 348, avg=89.20, stdev=144.68, samples=5 00:31:34.389 lat (usec) : 500=70.31%, 750=1.31% 00:31:34.390 lat (msec) : 50=27.95% 00:31:34.390 cpu : usr=0.07%, sys=0.15%, ctx=230, majf=0, minf=1 00:31:34.390 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.390 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.390 issued rwts: total=229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.390 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.390 00:31:34.390 Run status group 0 (all jobs): 00:31:34.390 READ: bw=4688KiB/s (4801kB/s), 97.6KiB/s-4236KiB/s (100.0kB/s-4337kB/s), io=15.2MiB (16.0MB), run=2698-3329msec 00:31:34.390 00:31:34.390 Disk stats (read/write): 00:31:34.390 nvme0n1: ios=89/0, merge=0/0, ticks=3311/0, in_queue=3311, util=97.19% 00:31:34.390 nvme0n2: ios=2785/0, merge=0/0, ticks=2994/0, in_queue=2994, util=94.64% 00:31:34.390 nvme0n3: ios=117/0, merge=0/0, ticks=3110/0, in_queue=3110, util=98.85% 00:31:34.390 nvme0n4: ios=226/0, merge=0/0, ticks=2591/0, in_queue=2591, 
util=96.48% 00:31:34.390 13:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:34.390 13:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:34.649 13:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:34.649 13:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:34.907 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:34.907 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:35.167 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:35.167 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:35.167 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:35.167 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1443002 00:31:35.167 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:35.167 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:31:35.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:35.426 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:35.426 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:31:35.426 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:35.426 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:35.426 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:35.426 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:35.426 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:31:35.426 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:35.426 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:35.426 nvmf hotplug test: fio failed as expected 00:31:35.426 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:35.685 rmmod nvme_tcp 00:31:35.685 rmmod nvme_fabrics 00:31:35.685 rmmod nvme_keyring 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1440527 ']' 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1440527 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1440527 ']' 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1440527 
00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1440527 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1440527' 00:31:35.685 killing process with pid 1440527 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1440527 00:31:35.685 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1440527 00:31:35.945 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:35.945 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:35.945 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:35.945 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:35.945 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:31:35.945 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:35.945 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # 
iptables-restore 00:31:35.945 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:35.945 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:35.945 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.945 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.945 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:38.480 00:31:38.480 real 0m25.707s 00:31:38.480 user 1m30.919s 00:31:38.480 sys 0m10.777s 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:38.480 ************************************ 00:31:38.480 END TEST nvmf_fio_target 00:31:38.480 ************************************ 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:38.480 ************************************ 00:31:38.480 START TEST 
nvmf_bdevio 00:31:38.480 ************************************ 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:38.480 * Looking for test storage... 00:31:38.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@340 -- # ver1_l=2 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:38.480 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:38.481 13:11:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:38.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.481 --rc genhtml_branch_coverage=1 00:31:38.481 --rc genhtml_function_coverage=1 00:31:38.481 --rc genhtml_legend=1 00:31:38.481 --rc geninfo_all_blocks=1 00:31:38.481 --rc geninfo_unexecuted_blocks=1 00:31:38.481 00:31:38.481 ' 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:38.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.481 --rc genhtml_branch_coverage=1 00:31:38.481 --rc genhtml_function_coverage=1 00:31:38.481 --rc genhtml_legend=1 00:31:38.481 --rc geninfo_all_blocks=1 00:31:38.481 --rc geninfo_unexecuted_blocks=1 00:31:38.481 00:31:38.481 ' 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:38.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.481 --rc genhtml_branch_coverage=1 00:31:38.481 --rc genhtml_function_coverage=1 00:31:38.481 --rc genhtml_legend=1 00:31:38.481 --rc geninfo_all_blocks=1 00:31:38.481 --rc geninfo_unexecuted_blocks=1 00:31:38.481 00:31:38.481 ' 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:38.481 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.481 --rc genhtml_branch_coverage=1 00:31:38.481 --rc genhtml_function_coverage=1 00:31:38.481 --rc genhtml_legend=1 00:31:38.481 --rc geninfo_all_blocks=1 00:31:38.481 --rc geninfo_unexecuted_blocks=1 00:31:38.481 00:31:38.481 ' 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.481 13:11:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:38.481 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.758 13:12:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:43.758 13:12:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:43.758 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:43.758 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:43.758 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:43.759 Found net devices under 0000:86:00.0: cvl_0_0 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:43.759 Found net devices under 0000:86:00.1: cvl_0_1 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.759 
13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:43.759 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:44.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:31:44.019 00:31:44.019 --- 10.0.0.2 ping statistics --- 00:31:44.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.019 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:44.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:44.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:31:44.019 00:31:44.019 --- 10.0.0.1 ping statistics --- 00:31:44.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.019 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:44.019 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@507 -- # nvmfpid=1447520 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1447520 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1447520 ']' 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:44.279 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:44.279 [2024-10-15 13:12:04.418461] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:44.279 [2024-10-15 13:12:04.419445] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:31:44.279 [2024-10-15 13:12:04.419484] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.279 [2024-10-15 13:12:04.492053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:44.279 [2024-10-15 13:12:04.534734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.279 [2024-10-15 13:12:04.534772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.279 [2024-10-15 13:12:04.534779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.279 [2024-10-15 13:12:04.534785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.279 [2024-10-15 13:12:04.534790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.279 [2024-10-15 13:12:04.536335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:44.279 [2024-10-15 13:12:04.536449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:44.279 [2024-10-15 13:12:04.536664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:44.279 [2024-10-15 13:12:04.536665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:44.539 [2024-10-15 13:12:04.603636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:44.539 [2024-10-15 13:12:04.604034] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:44.539 [2024-10-15 13:12:04.604616] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:44.539 [2024-10-15 13:12:04.605289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:44.539 [2024-10-15 13:12:04.605321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:44.539 [2024-10-15 13:12:04.669396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:44.539 Malloc0 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:44.539 [2024-10-15 13:12:04.753670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:44.539 { 00:31:44.539 "params": { 00:31:44.539 "name": "Nvme$subsystem", 00:31:44.539 "trtype": "$TEST_TRANSPORT", 00:31:44.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.539 "adrfam": "ipv4", 00:31:44.539 "trsvcid": "$NVMF_PORT", 00:31:44.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.539 "hdgst": ${hdgst:-false}, 00:31:44.539 "ddgst": ${ddgst:-false} 00:31:44.539 }, 00:31:44.539 "method": "bdev_nvme_attach_controller" 00:31:44.539 } 00:31:44.539 EOF 00:31:44.539 )") 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:31:44.539 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:44.539 "params": { 00:31:44.539 "name": "Nvme1", 00:31:44.539 "trtype": "tcp", 00:31:44.539 "traddr": "10.0.0.2", 00:31:44.539 "adrfam": "ipv4", 00:31:44.539 "trsvcid": "4420", 00:31:44.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:44.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:44.539 "hdgst": false, 00:31:44.539 "ddgst": false 00:31:44.539 }, 00:31:44.539 "method": "bdev_nvme_attach_controller" 00:31:44.539 }' 00:31:44.539 [2024-10-15 13:12:04.804643] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:31:44.539 [2024-10-15 13:12:04.804694] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447736 ] 00:31:44.798 [2024-10-15 13:12:04.875429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:44.798 [2024-10-15 13:12:04.919352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.798 [2024-10-15 13:12:04.919463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.798 [2024-10-15 13:12:04.919463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:45.056 I/O targets: 00:31:45.056 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:45.056 00:31:45.056 00:31:45.056 CUnit - A unit testing framework for C - Version 2.1-3 00:31:45.056 http://cunit.sourceforge.net/ 00:31:45.056 00:31:45.056 00:31:45.056 Suite: bdevio tests on: Nvme1n1 00:31:45.056 Test: blockdev write read block ...passed 00:31:45.056 Test: blockdev write zeroes read block ...passed 00:31:45.056 Test: blockdev write zeroes read no split ...passed 00:31:45.056 Test: blockdev 
write zeroes read split ...passed 00:31:45.056 Test: blockdev write zeroes read split partial ...passed 00:31:45.056 Test: blockdev reset ...[2024-10-15 13:12:05.339149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.056 [2024-10-15 13:12:05.339208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc2400 (9): Bad file descriptor 00:31:45.315 [2024-10-15 13:12:05.390803] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:45.315 passed 00:31:45.315 Test: blockdev write read 8 blocks ...passed 00:31:45.315 Test: blockdev write read size > 128k ...passed 00:31:45.315 Test: blockdev write read invalid size ...passed 00:31:45.315 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:45.315 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:45.315 Test: blockdev write read max offset ...passed 00:31:45.315 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:45.315 Test: blockdev writev readv 8 blocks ...passed 00:31:45.315 Test: blockdev writev readv 30 x 1block ...passed 00:31:45.574 Test: blockdev writev readv block ...passed 00:31:45.574 Test: blockdev writev readv size > 128k ...passed 00:31:45.574 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:45.574 Test: blockdev comparev and writev ...[2024-10-15 13:12:05.641522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.574 [2024-10-15 13:12:05.641553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.574 [2024-10-15 13:12:05.641572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.574 [2024-10-15 13:12:05.641579] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:45.574 [2024-10-15 13:12:05.641879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.574 [2024-10-15 13:12:05.641890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:45.574 [2024-10-15 13:12:05.641902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.575 [2024-10-15 13:12:05.641909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:45.575 [2024-10-15 13:12:05.642193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.575 [2024-10-15 13:12:05.642203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:45.575 [2024-10-15 13:12:05.642214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.575 [2024-10-15 13:12:05.642221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:45.575 [2024-10-15 13:12:05.642495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.575 [2024-10-15 13:12:05.642507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:45.575 [2024-10-15 13:12:05.642518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:31:45.575 [2024-10-15 13:12:05.642526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:45.575 passed 00:31:45.575 Test: blockdev nvme passthru rw ...passed 00:31:45.575 Test: blockdev nvme passthru vendor specific ...[2024-10-15 13:12:05.724993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:45.575 [2024-10-15 13:12:05.725011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:45.575 [2024-10-15 13:12:05.725120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:45.575 [2024-10-15 13:12:05.725130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:45.575 [2024-10-15 13:12:05.725240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:45.575 [2024-10-15 13:12:05.725251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:45.575 [2024-10-15 13:12:05.725361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:45.575 [2024-10-15 13:12:05.725371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:45.575 passed 00:31:45.575 Test: blockdev nvme admin passthru ...passed 00:31:45.575 Test: blockdev copy ...passed 00:31:45.575 00:31:45.575 Run Summary: Type Total Ran Passed Failed Inactive 00:31:45.575 suites 1 1 n/a 0 0 00:31:45.575 tests 23 23 23 0 0 00:31:45.575 asserts 152 152 152 0 n/a 00:31:45.575 00:31:45.575 Elapsed time = 1.189 seconds 00:31:45.834 13:12:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:45.834 rmmod nvme_tcp 00:31:45.834 rmmod nvme_fabrics 00:31:45.834 rmmod nvme_keyring 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # 
'[' -n 1447520 ']' 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1447520 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1447520 ']' 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1447520 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:45.834 13:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1447520 00:31:45.834 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:31:45.834 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:31:45.834 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1447520' 00:31:45.834 killing process with pid 1447520 00:31:45.834 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1447520 00:31:45.834 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1447520 00:31:46.093 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:46.093 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:46.093 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:46.093 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 
00:31:46.093 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:31:46.093 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:46.093 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:31:46.093 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:46.093 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:46.093 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.093 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.093 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.997 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:47.997 00:31:47.997 real 0m10.034s 00:31:47.997 user 0m9.372s 00:31:47.997 sys 0m5.207s 00:31:47.997 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:47.997 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.997 ************************************ 00:31:47.997 END TEST nvmf_bdevio 00:31:47.997 ************************************ 00:31:48.256 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:48.256 00:31:48.256 real 4m32.444s 00:31:48.256 user 9m4.122s 00:31:48.256 sys 1m52.269s 00:31:48.256 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:31:48.256 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:48.256 ************************************ 00:31:48.256 END TEST nvmf_target_core_interrupt_mode 00:31:48.256 ************************************ 00:31:48.256 13:12:08 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:48.256 13:12:08 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:48.256 13:12:08 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:48.256 13:12:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:48.257 ************************************ 00:31:48.257 START TEST nvmf_interrupt 00:31:48.257 ************************************ 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:48.257 * Looking for test storage... 
00:31:48.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:48.257 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:48.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.517 --rc genhtml_branch_coverage=1 00:31:48.517 --rc genhtml_function_coverage=1 00:31:48.517 --rc genhtml_legend=1 00:31:48.517 --rc geninfo_all_blocks=1 00:31:48.517 --rc geninfo_unexecuted_blocks=1 00:31:48.517 00:31:48.517 ' 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:48.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.517 --rc genhtml_branch_coverage=1 00:31:48.517 --rc 
genhtml_function_coverage=1 00:31:48.517 --rc genhtml_legend=1 00:31:48.517 --rc geninfo_all_blocks=1 00:31:48.517 --rc geninfo_unexecuted_blocks=1 00:31:48.517 00:31:48.517 ' 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:48.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.517 --rc genhtml_branch_coverage=1 00:31:48.517 --rc genhtml_function_coverage=1 00:31:48.517 --rc genhtml_legend=1 00:31:48.517 --rc geninfo_all_blocks=1 00:31:48.517 --rc geninfo_unexecuted_blocks=1 00:31:48.517 00:31:48.517 ' 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:48.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.517 --rc genhtml_branch_coverage=1 00:31:48.517 --rc genhtml_function_coverage=1 00:31:48.517 --rc genhtml_legend=1 00:31:48.517 --rc geninfo_all_blocks=1 00:31:48.517 --rc geninfo_unexecuted_blocks=1 00:31:48.517 00:31:48.517 ' 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.517 
13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.517 
13:12:08 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:48.517 13:12:08 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:48.517 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:48.518 
13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:48.518 13:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.090 13:12:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.090 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:55.091 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:55.091 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:55.091 13:12:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:55.091 Found net devices under 0000:86:00.0: cvl_0_0 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:55.091 Found net devices under 0000:86:00.1: cvl_0_1 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.091 13:12:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:31:55.091 00:31:55.091 --- 10.0.0.2 ping statistics --- 00:31:55.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.091 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:55.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:31:55.091 00:31:55.091 --- 10.0.0.1 ping statistics --- 00:31:55.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.091 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:55.091 13:12:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=1451689 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 1451689 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1451689 ']' 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.091 [2024-10-15 13:12:14.613684] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:55.091 [2024-10-15 13:12:14.614590] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:31:55.091 [2024-10-15 13:12:14.614629] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.091 [2024-10-15 13:12:14.684816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:55.091 [2024-10-15 13:12:14.725555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.091 [2024-10-15 13:12:14.725589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.091 [2024-10-15 13:12:14.725596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.091 [2024-10-15 13:12:14.725606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.091 [2024-10-15 13:12:14.725612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.091 [2024-10-15 13:12:14.726791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.091 [2024-10-15 13:12:14.726792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.091 [2024-10-15 13:12:14.792415] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:55.091 [2024-10-15 13:12:14.793033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:55.091 [2024-10-15 13:12:14.793212] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:55.091 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:55.091 5000+0 records in 00:31:55.091 5000+0 records out 00:31:55.092 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0184333 s, 556 MB/s 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.092 AIO0 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.092 13:12:14 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.092 [2024-10-15 13:12:14.935582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.092 [2024-10-15 13:12:14.975944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1451689 0 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1451689 0 idle 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1451689 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1451689 -w 256 00:31:55.092 13:12:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1451689 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.24 reactor_0' 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1451689 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.24 reactor_0 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:55.092 
13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1451689 1 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1451689 1 idle 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1451689 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1451689 -w 256 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1451716 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1451716 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1451947 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1451689 0 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1451689 0 busy 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1451689 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1451689 -w 256 00:31:55.092 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1451689 root 20 0 128.2g 46848 33792 R 86.7 0.0 0:00.37 reactor_0' 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1451689 root 20 0 128.2g 46848 33792 R 86.7 0.0 0:00.37 reactor_0 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=86.7 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=86 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:55.352 13:12:15 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1451689 1 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1451689 1 busy 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1451689 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1451689 -w 256 00:31:55.352 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:55.611 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1451716 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.24 reactor_1' 00:31:55.611 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1451716 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.24 reactor_1 00:31:55.611 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:55.611 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:55.611 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:55.611 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:31:55.611 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:55.611 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:55.611 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:55.611 13:12:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:55.611 13:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1451947 00:32:05.695 Initializing NVMe Controllers 00:32:05.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:05.695 Controller IO queue size 256, less than required. 00:32:05.695 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:05.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:05.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:05.695 Initialization complete. Launching workers. 
00:32:05.695 ======================================================== 00:32:05.695 Latency(us) 00:32:05.695 Device Information : IOPS MiB/s Average min max 00:32:05.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16986.10 66.35 15078.40 2959.86 30016.82 00:32:05.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16477.70 64.37 15542.76 7746.46 56420.82 00:32:05.695 ======================================================== 00:32:05.695 Total : 33463.80 130.72 15307.05 2959.86 56420.82 00:32:05.695 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1451689 0 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1451689 0 idle 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1451689 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1451689 -w 256 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1451689 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.23 reactor_0' 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1451689 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.23 reactor_0 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1451689 1 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1451689 1 idle 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1451689 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:05.695 13:12:25 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1451689 -w 256 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1451716 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1451716 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:05.695 13:12:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:06.263 13:12:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:32:06.263 13:12:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:32:06.263 13:12:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:06.263 13:12:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:06.263 13:12:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1451689 0 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1451689 0 idle 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1451689 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1451689 -w 256 00:32:08.168 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1451689 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:20.48 reactor_0' 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1451689 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:20.48 reactor_0 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1451689 1 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1451689 1 idle 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1451689 00:32:08.427 
13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1451689 -w 256 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:08.427 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1451716 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.10 reactor_1' 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1451716 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.10 reactor_1 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:08.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:08.687 rmmod nvme_tcp 00:32:08.687 rmmod nvme_fabrics 00:32:08.687 rmmod nvme_keyring 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:08.687 13:12:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 1451689 ']' 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 1451689 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1451689 ']' 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1451689 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:08.687 13:12:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1451689 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1451689' 00:32:08.946 killing process with pid 1451689 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1451689 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1451689 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@789 -- # iptables-restore 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:08.946 13:12:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.483 13:12:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:11.483 00:32:11.483 real 0m22.863s 00:32:11.483 user 0m39.662s 00:32:11.483 sys 0m8.460s 00:32:11.483 13:12:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:11.483 13:12:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:11.483 ************************************ 00:32:11.483 END TEST nvmf_interrupt 00:32:11.483 ************************************ 00:32:11.483 00:32:11.483 real 27m3.192s 00:32:11.483 user 55m47.898s 00:32:11.483 sys 9m12.715s 00:32:11.483 13:12:31 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:11.483 13:12:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.483 ************************************ 00:32:11.483 END TEST nvmf_tcp 00:32:11.483 ************************************ 00:32:11.483 13:12:31 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:32:11.483 13:12:31 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:11.483 13:12:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:11.483 13:12:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:11.483 13:12:31 -- common/autotest_common.sh@10 -- # set +x 00:32:11.483 ************************************ 
00:32:11.483 START TEST spdkcli_nvmf_tcp 00:32:11.483 ************************************ 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:11.483 * Looking for test storage... 00:32:11.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:11.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.483 --rc genhtml_branch_coverage=1 00:32:11.483 --rc genhtml_function_coverage=1 00:32:11.483 --rc genhtml_legend=1 00:32:11.483 --rc geninfo_all_blocks=1 00:32:11.483 --rc geninfo_unexecuted_blocks=1 00:32:11.483 00:32:11.483 ' 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:11.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.483 --rc genhtml_branch_coverage=1 00:32:11.483 --rc genhtml_function_coverage=1 00:32:11.483 --rc genhtml_legend=1 00:32:11.483 --rc geninfo_all_blocks=1 
00:32:11.483 --rc geninfo_unexecuted_blocks=1 00:32:11.483 00:32:11.483 ' 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:11.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.483 --rc genhtml_branch_coverage=1 00:32:11.483 --rc genhtml_function_coverage=1 00:32:11.483 --rc genhtml_legend=1 00:32:11.483 --rc geninfo_all_blocks=1 00:32:11.483 --rc geninfo_unexecuted_blocks=1 00:32:11.483 00:32:11.483 ' 00:32:11.483 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:11.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.484 --rc genhtml_branch_coverage=1 00:32:11.484 --rc genhtml_function_coverage=1 00:32:11.484 --rc genhtml_legend=1 00:32:11.484 --rc geninfo_all_blocks=1 00:32:11.484 --rc geninfo_unexecuted_blocks=1 00:32:11.484 00:32:11.484 ' 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:11.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1454637 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1454637 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1454637 ']' 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.484 
13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:11.484 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.484 [2024-10-15 13:12:31.647261] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:32:11.484 [2024-10-15 13:12:31.647306] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454637 ] 00:32:11.484 [2024-10-15 13:12:31.716091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:11.484 [2024-10-15 13:12:31.756505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.484 [2024-10-15 13:12:31.756504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.742 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:11.743 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:32:11.743 13:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:11.743 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:11.743 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.743 13:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:11.743 13:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:11.743 13:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:32:11.743 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:11.743 13:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.743 13:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:11.743 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:11.743 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:11.743 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:11.743 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:11.743 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:11.743 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:11.743 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:11.743 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:11.743 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:11.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:11.743 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:11.743 ' 00:32:14.276 [2024-10-15 13:12:34.580916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.653 [2024-10-15 13:12:35.917381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:18.188 [2024-10-15 13:12:38.392928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:32:20.724 [2024-10-15 13:12:40.539843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:22.101 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:22.101 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:22.101 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:22.101 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:22.101 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:22.101 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:22.101 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:22.101 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:22.101 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:22.101 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:22.101 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:22.101 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:22.101 13:12:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:22.101 13:12:42 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@730 -- # xtrace_disable 00:32:22.101 13:12:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.101 13:12:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:22.101 13:12:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:22.101 13:12:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.101 13:12:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:22.101 13:12:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:22.672 13:12:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:22.672 13:12:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:22.672 13:12:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:22.672 13:12:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:22.672 13:12:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.672 13:12:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:22.672 13:12:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:22.672 13:12:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.672 13:12:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:22.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:22.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:22.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:22.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:22.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:22.672 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:22.672 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:22.672 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:22.672 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:22.672 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:22.672 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:22.672 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:22.672 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:22.672 ' 00:32:29.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:29.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:29.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:29.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:29.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:29.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:29.239 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:29.239 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:29.239 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:29.239 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:29.239 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:29.239 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:29.239 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:29.239 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:29.239 13:12:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:29.239 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:29.239 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1454637 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1454637 ']' 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1454637 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1454637 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1454637' 00:32:29.240 killing process with pid 1454637 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1454637 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1454637 00:32:29.240 13:12:48 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1454637 ']' 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1454637 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1454637 ']' 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1454637 00:32:29.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1454637) - No such process 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1454637 is not found' 00:32:29.240 Process with pid 1454637 is not found 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:29.240 00:32:29.240 real 0m17.299s 00:32:29.240 user 0m38.124s 00:32:29.240 sys 0m0.788s 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:29.240 13:12:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:29.240 ************************************ 00:32:29.240 END TEST spdkcli_nvmf_tcp 00:32:29.240 ************************************ 00:32:29.240 13:12:48 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:29.240 13:12:48 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:29.240 13:12:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:32:29.240 13:12:48 -- common/autotest_common.sh@10 -- # set +x 00:32:29.240 ************************************ 00:32:29.240 START TEST nvmf_identify_passthru 00:32:29.240 ************************************ 00:32:29.240 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:29.240 * Looking for test storage... 00:32:29.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:29.240 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:29.240 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:32:29.240 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:29.240 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:29.240 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:29.240 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:29.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.240 --rc genhtml_branch_coverage=1 00:32:29.240 --rc genhtml_function_coverage=1 00:32:29.240 --rc genhtml_legend=1 00:32:29.240 --rc geninfo_all_blocks=1 00:32:29.240 --rc geninfo_unexecuted_blocks=1 00:32:29.240 
00:32:29.240 ' 00:32:29.240 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:29.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.240 --rc genhtml_branch_coverage=1 00:32:29.240 --rc genhtml_function_coverage=1 00:32:29.240 --rc genhtml_legend=1 00:32:29.240 --rc geninfo_all_blocks=1 00:32:29.240 --rc geninfo_unexecuted_blocks=1 00:32:29.240 00:32:29.240 ' 00:32:29.240 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:29.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.240 --rc genhtml_branch_coverage=1 00:32:29.240 --rc genhtml_function_coverage=1 00:32:29.240 --rc genhtml_legend=1 00:32:29.240 --rc geninfo_all_blocks=1 00:32:29.240 --rc geninfo_unexecuted_blocks=1 00:32:29.240 00:32:29.240 ' 00:32:29.240 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:29.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.240 --rc genhtml_branch_coverage=1 00:32:29.240 --rc genhtml_function_coverage=1 00:32:29.240 --rc genhtml_legend=1 00:32:29.240 --rc geninfo_all_blocks=1 00:32:29.240 --rc geninfo_unexecuted_blocks=1 00:32:29.240 00:32:29.240 ' 00:32:29.240 13:12:48 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.240 13:12:48 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.240 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.240 13:12:48 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.240 13:12:48 nvmf_identify_passthru -- paths/export.sh@2 -- # 
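[annotation] `nvmf/common.sh@17` above derives `NVME_HOSTNQN` from `nvme gen-hostnqn`, which emits the UUID-based NQN format from the NVMe spec. A sketch of producing the same shape without `nvme-cli`, assuming a Linux host (the `/proc` UUID source is our substitution, not what the tool uses internally):

```shell
# grab a random UUID from the kernel and wrap it in the
# nqn.2014-08.org.nvmexpress:uuid:<uuid> host NQN format seen above
uuid=$(cat /proc/sys/kernel/random/uuid)
hostnqn="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
echo "$hostnqn"
```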
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.240 13:12:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.240 13:12:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.240 13:12:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:29.240 13:12:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:29.241 13:12:48 
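[annotation] The `paths/export.sh` traces above show the same `/opt/go`, `/opt/golangci`, and `/opt/protoc` directories being prepended on every `source`, so `PATH` accumulates duplicates each time `scripts/common.sh` is re-sourced. Harmless, but a small sketch of first-seen-order deduplication (the function name is ours) shows how the list could be kept tidy:

```shell
dedup_path() {
    # walk the colon-separated list, keeping only the first occurrence
    # of each entry
    local entry out= seen=:
    local IFS=:
    for entry in $1; do
        case "$seen" in *":$entry:"*) continue ;; esac
        seen="$seen$entry:"
        out="${out:+$out:}$entry"
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin"   # /opt/go/bin:/usr/bin:/bin
```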
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:29.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:29.241 13:12:48 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.241 13:12:48 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:29.241 13:12:48 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.241 13:12:48 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.241 13:12:48 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.241 13:12:48 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.241 13:12:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.241 13:12:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.241 13:12:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:29.241 13:12:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.241 13:12:48 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.241 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:29.241 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:29.241 13:12:48 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:29.241 13:12:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:34.514 
13:12:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:34.514 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:34.514 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:34.514 Found net devices under 0000:86:00.0: cvl_0_0 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.514 13:12:54 
nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:34.514 Found net devices under 0000:86:00.1: cvl_0_1 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.514 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:34.515 
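[annotation] The `nvmf/common.sh@408-@427` loop above resolves each matched PCI function (here the two E810 ports at 0000:86:00.0/.1) to its kernel interface names (`cvl_0_0`, `cvl_0_1`) by globbing sysfs. A sketch of that lookup; the optional second parameter for the sysfs root is our addition so the function can be pointed at a fake tree for testing:

```shell
list_pci_net_devs() {
    # every netdev bound to a PCI function appears as a directory
    # under <sysfs-root>/<bdf>/net/
    local pci=$1 root=${2:-/sys/bus/pci/devices}
    local devs=("$root/$pci/net/"*)
    # strip the path prefix, keeping only the interface names
    printf '%s\n' "${devs[@]##*/}"
}
```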
13:12:54 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.515 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:34.775 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:34.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:34.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:32:34.775 00:32:34.775 --- 10.0.0.2 ping statistics --- 00:32:34.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.775 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:32:34.775 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:34.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:32:34.775 00:32:34.775 --- 10.0.0.1 ping statistics --- 00:32:34.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.775 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:32:34.775 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.775 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:32:34.775 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:34.775 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.775 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:34.775 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:34.775 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.775 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:34.775 13:12:54 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:34.775 13:12:54 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:34.775 13:12:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:34.775 
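[annotation] The `nvmf_tcp_init` sequence above (common.sh@265-@291) moves one NIC port into a network namespace so the target (10.0.0.2, inside `cvl_0_0_ns_spdk`) and the initiator (10.0.0.1, in the root namespace) exchange traffic over real wire, then verifies both directions with `ping`. A dry-run sketch of those steps; `run()` only echoes, since the real commands need root and the actual interfaces:

```shell
run() { echo "+ $*"; }   # stand-in: print instead of executing (root required)

run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # cross-ns reachability
```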
13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:32:34.775 13:12:54 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:32:34.775 13:12:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:34.775 13:12:54 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:34.775 13:12:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:34.775 13:12:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:34.775 13:12:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:40.048 13:12:59 nvmf_identify_passthru -- 
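[annotation] `get_first_nvme_bdf` above pipes `gen_nvme.sh` output through `jq -r '.config[].params.traddr'` and takes the first PCIe address (0000:5e:00.0 here). A jq-free sketch of the same extraction over a sample payload (the inline JSON and the `sed` substitution are ours, for illustration only):

```shell
# a gen_nvme.sh-shaped config fragment with one controller
json='{"config":[{"params":{"traddr":"0000:5e:00.0"}}]}'
# pull the first traddr value out of the JSON
bdf=$(printf '%s' "$json" | sed -n 's/.*"traddr":"\([^"]*\)".*/\1/p')
echo "$bdf"   # 0000:5e:00.0
```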
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:32:40.048 13:12:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:40.048 13:12:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:40.048 13:12:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:44.240 13:13:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:44.240 13:13:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:44.240 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:44.240 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.240 13:13:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:44.240 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:44.240 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.240 13:13:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1462116 00:32:44.240 13:13:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:44.240 13:13:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:44.240 13:13:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1462116 00:32:44.240 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1462116 ']' 00:32:44.240 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:32:44.240 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:44.240 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:44.240 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:44.240 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.240 [2024-10-15 13:13:04.467186] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:32:44.240 [2024-10-15 13:13:04.467234] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:44.240 [2024-10-15 13:13:04.540143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:44.499 [2024-10-15 13:13:04.582987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:44.499 [2024-10-15 13:13:04.583021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:44.499 [2024-10-15 13:13:04.583028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:44.499 [2024-10-15 13:13:04.583034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:44.499 [2024-10-15 13:13:04.583039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:44.499 [2024-10-15 13:13:04.584546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.499 [2024-10-15 13:13:04.584656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:44.499 [2024-10-15 13:13:04.584764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.499 [2024-10-15 13:13:04.584765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:44.499 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:44.499 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:32:44.499 13:13:04 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:44.499 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.499 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.499 INFO: Log level set to 20 00:32:44.499 INFO: Requests: 00:32:44.499 { 00:32:44.499 "jsonrpc": "2.0", 00:32:44.499 "method": "nvmf_set_config", 00:32:44.499 "id": 1, 00:32:44.499 "params": { 00:32:44.499 "admin_cmd_passthru": { 00:32:44.499 "identify_ctrlr": true 00:32:44.499 } 00:32:44.499 } 00:32:44.499 } 00:32:44.499 00:32:44.499 INFO: response: 00:32:44.499 { 00:32:44.499 "jsonrpc": "2.0", 00:32:44.499 "id": 1, 00:32:44.499 "result": true 00:32:44.499 } 00:32:44.499 00:32:44.499 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.499 13:13:04 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:44.499 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.499 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.499 INFO: Setting log level to 20 00:32:44.499 INFO: Setting log level to 20 00:32:44.499 INFO: Log level set to 20 00:32:44.499 INFO: Log level set to 20 00:32:44.499 
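[annotation] The `INFO: Requests:` block above is the JSON-RPC 2.0 envelope that `rpc_cmd` sends to the target; SPDK listens on a Unix domain socket (`/var/tmp/spdk.sock` per the startup message). A sketch that only builds and prints the payload; the commented delivery line is one assumed transport and needs a running target:

```shell
# the nvmf_set_config request logged above, as a single JSON-RPC envelope
req='{"jsonrpc": "2.0", "method": "nvmf_set_config", "id": 1,
 "params": {"admin_cmd_passthru": {"identify_ctrlr": true}}}'
# delivery (assumption, requires a live target):
#   printf '%s' "$req" | nc -U /var/tmp/spdk.sock
printf '%s\n' "$req"
```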
INFO: Requests: 00:32:44.499 { 00:32:44.499 "jsonrpc": "2.0", 00:32:44.499 "method": "framework_start_init", 00:32:44.499 "id": 1 00:32:44.499 } 00:32:44.499 00:32:44.499 INFO: Requests: 00:32:44.499 { 00:32:44.499 "jsonrpc": "2.0", 00:32:44.499 "method": "framework_start_init", 00:32:44.499 "id": 1 00:32:44.499 } 00:32:44.499 00:32:44.499 [2024-10-15 13:13:04.691344] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:44.499 INFO: response: 00:32:44.499 { 00:32:44.499 "jsonrpc": "2.0", 00:32:44.499 "id": 1, 00:32:44.499 "result": true 00:32:44.499 } 00:32:44.499 00:32:44.499 INFO: response: 00:32:44.499 { 00:32:44.499 "jsonrpc": "2.0", 00:32:44.499 "id": 1, 00:32:44.499 "result": true 00:32:44.499 } 00:32:44.499 00:32:44.499 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.499 13:13:04 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:44.499 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.499 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.499 INFO: Setting log level to 40 00:32:44.499 INFO: Setting log level to 40 00:32:44.499 INFO: Setting log level to 40 00:32:44.499 [2024-10-15 13:13:04.704673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:44.499 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.500 13:13:04 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:44.500 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:44.500 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.500 13:13:04 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:44.500 13:13:04 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.500 13:13:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.789 Nvme0n1 00:32:47.789 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:47.789 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.789 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.789 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:47.789 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.789 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.789 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:47.789 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.789 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.789 [2024-10-15 13:13:07.616718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.789 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:47.789 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.789 13:13:07 
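[annotation] Steps @41-@44 of `identify_passthru.sh` above attach the local PCIe drive as bdev `Nvme0n1`, wrap it in subsystem `cnode1` with passthru enabled, and export it back out over TCP on 10.0.0.2:4420. A dry-run sketch of that RPC sequence; `rpc_cmd` here just echoes the call it would make instead of talking to the target:

```shell
rpc_cmd() { echo "rpc.py $*"; }   # stand-in for the autotest rpc_cmd wrapper

rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```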
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.789 [ 00:32:47.789 { 00:32:47.789 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:47.789 "subtype": "Discovery", 00:32:47.789 "listen_addresses": [], 00:32:47.789 "allow_any_host": true, 00:32:47.789 "hosts": [] 00:32:47.789 }, 00:32:47.789 { 00:32:47.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:47.789 "subtype": "NVMe", 00:32:47.789 "listen_addresses": [ 00:32:47.789 { 00:32:47.789 "trtype": "TCP", 00:32:47.789 "adrfam": "IPv4", 00:32:47.789 "traddr": "10.0.0.2", 00:32:47.789 "trsvcid": "4420" 00:32:47.789 } 00:32:47.789 ], 00:32:47.789 "allow_any_host": true, 00:32:47.789 "hosts": [], 00:32:47.789 "serial_number": "SPDK00000000000001", 00:32:47.789 "model_number": "SPDK bdev Controller", 00:32:47.789 "max_namespaces": 1, 00:32:47.789 "min_cntlid": 1, 00:32:47.789 "max_cntlid": 65519, 00:32:47.789 "namespaces": [ 00:32:47.789 { 00:32:47.789 "nsid": 1, 00:32:47.789 "bdev_name": "Nvme0n1", 00:32:47.789 "name": "Nvme0n1", 00:32:47.789 "nguid": "999CD2ACAE654FCCBBCBD9B9B57F0D61", 00:32:47.789 "uuid": "999cd2ac-ae65-4fcc-bbcb-d9b9b57f0d61" 00:32:47.789 } 00:32:47.789 ] 00:32:47.789 } 00:32:47.789 ] 00:32:47.789 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:32:47.789 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:47.790 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:47.790 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.790 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.790 13:13:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.790 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:47.790 13:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:47.790 13:13:07 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:47.790 13:13:07 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:47.790 13:13:07 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.790 13:13:07 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:47.790 13:13:07 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.790 13:13:07 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.790 rmmod nvme_tcp 00:32:47.790 rmmod nvme_fabrics 00:32:47.790 rmmod nvme_keyring 00:32:47.790 13:13:08 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.790 13:13:08 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:47.790 13:13:08 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:47.790 13:13:08 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 1462116 ']' 00:32:47.790 13:13:08 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 1462116 00:32:47.790 13:13:08 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1462116 ']' 00:32:47.790 13:13:08 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1462116 00:32:47.790 13:13:08 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:32:47.790 13:13:08 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:47.790 13:13:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1462116 00:32:47.790 13:13:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:47.790 13:13:08 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:47.790 13:13:08 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1462116' 00:32:47.790 killing process with pid 1462116 00:32:47.790 13:13:08 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1462116 00:32:47.790 13:13:08 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1462116 00:32:50.323 13:13:10 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:50.323 13:13:10 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:50.323 13:13:10 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:50.323 13:13:10 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:50.323 13:13:10 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:32:50.323 13:13:10 nvmf_identify_passthru -- 
nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:50.323 13:13:10 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:32:50.323 13:13:10 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:50.323 13:13:10 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:50.323 13:13:10 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.323 13:13:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:50.323 13:13:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.230 13:13:12 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:52.230 00:32:52.230 real 0m23.399s 00:32:52.230 user 0m29.627s 00:32:52.230 sys 0m6.297s 00:32:52.230 13:13:12 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:52.230 13:13:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:52.230 ************************************ 00:32:52.230 END TEST nvmf_identify_passthru 00:32:52.230 ************************************ 00:32:52.230 13:13:12 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:52.230 13:13:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:52.230 13:13:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:52.230 13:13:12 -- common/autotest_common.sh@10 -- # set +x 00:32:52.230 ************************************ 00:32:52.230 START TEST nvmf_dif 00:32:52.230 ************************************ 00:32:52.230 13:13:12 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:52.230 * Looking for test storage... 
00:32:52.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:52.230 13:13:12 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:52.230 13:13:12 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:32:52.230 13:13:12 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:52.230 13:13:12 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:52.230 13:13:12 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:52.230 13:13:12 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:52.231 13:13:12 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:52.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.231 --rc genhtml_branch_coverage=1 00:32:52.231 --rc genhtml_function_coverage=1 00:32:52.231 --rc genhtml_legend=1 00:32:52.231 --rc geninfo_all_blocks=1 00:32:52.231 --rc geninfo_unexecuted_blocks=1 00:32:52.231 00:32:52.231 ' 00:32:52.231 13:13:12 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:52.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.231 --rc genhtml_branch_coverage=1 00:32:52.231 --rc genhtml_function_coverage=1 00:32:52.231 --rc genhtml_legend=1 00:32:52.231 --rc geninfo_all_blocks=1 00:32:52.231 --rc geninfo_unexecuted_blocks=1 00:32:52.231 00:32:52.231 ' 00:32:52.231 13:13:12 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:32:52.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.231 --rc genhtml_branch_coverage=1 00:32:52.231 --rc genhtml_function_coverage=1 00:32:52.231 --rc genhtml_legend=1 00:32:52.231 --rc geninfo_all_blocks=1 00:32:52.231 --rc geninfo_unexecuted_blocks=1 00:32:52.231 00:32:52.231 ' 00:32:52.231 13:13:12 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:52.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.231 --rc genhtml_branch_coverage=1 00:32:52.231 --rc genhtml_function_coverage=1 00:32:52.231 --rc genhtml_legend=1 00:32:52.231 --rc geninfo_all_blocks=1 00:32:52.231 --rc geninfo_unexecuted_blocks=1 00:32:52.231 00:32:52.231 ' 00:32:52.231 13:13:12 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:52.231 13:13:12 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:52.231 13:13:12 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:52.231 13:13:12 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.231 13:13:12 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.231 13:13:12 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.231 13:13:12 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.231 13:13:12 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.231 13:13:12 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.231 13:13:12 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:52.231 13:13:12 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:52.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:52.231 13:13:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:52.231 13:13:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:32:52.231 13:13:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:52.231 13:13:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:52.231 13:13:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.231 13:13:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:52.231 13:13:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:52.231 13:13:12 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:32:52.231 13:13:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:32:58.799 13:13:17 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:58.799 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:58.799 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.799 13:13:17 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:58.799 13:13:18 nvmf_dif -- 
nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:58.799 Found net devices under 0000:86:00.0: cvl_0_0 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:58.799 Found net devices under 0000:86:00.1: cvl_0_1 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:58.799 
13:13:18 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:58.799 13:13:18 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:58.800 13:13:18 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:58.800 13:13:18 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:58.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:58.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:32:58.800 00:32:58.800 --- 10.0.0.2 ping statistics --- 00:32:58.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.800 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:32:58.800 13:13:18 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:58.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:58.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:32:58.800 00:32:58.800 --- 10.0.0.1 ping statistics --- 00:32:58.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.800 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:32:58.800 13:13:18 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:58.800 13:13:18 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:32:58.800 13:13:18 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:32:58.800 13:13:18 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:00.704 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:00.704 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:00.704 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:00.704 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:00.963 13:13:21 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.963 13:13:21 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:00.963 13:13:21 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:00.963 13:13:21 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.963 13:13:21 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:00.963 13:13:21 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:00.963 13:13:21 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:00.963 13:13:21 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:00.963 13:13:21 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:00.963 13:13:21 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:00.963 13:13:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:00.963 13:13:21 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=1467584 00:33:00.963 13:13:21 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 1467584 00:33:00.963 13:13:21 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:00.963 13:13:21 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1467584 ']' 00:33:00.963 13:13:21 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.963 13:13:21 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:00.963 13:13:21 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:00.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.963 13:13:21 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:00.963 13:13:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:00.963 [2024-10-15 13:13:21.218302] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:33:00.963 [2024-10-15 13:13:21.218346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.222 [2024-10-15 13:13:21.288810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.222 [2024-10-15 13:13:21.329757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:01.222 [2024-10-15 13:13:21.329789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:01.222 [2024-10-15 13:13:21.329797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:01.222 [2024-10-15 13:13:21.329803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:01.222 [2024-10-15 13:13:21.329808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:01.222 [2024-10-15 13:13:21.330337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.222 13:13:21 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:01.222 13:13:21 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:33:01.222 13:13:21 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:01.222 13:13:21 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:01.222 13:13:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:01.222 13:13:21 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.222 13:13:21 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:01.222 13:13:21 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:01.222 13:13:21 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.222 13:13:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:01.222 [2024-10-15 13:13:21.466418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:01.222 13:13:21 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.222 13:13:21 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:01.222 13:13:21 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:01.222 13:13:21 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:01.222 13:13:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:01.222 ************************************ 00:33:01.222 START TEST fio_dif_1_default 00:33:01.222 ************************************ 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:01.222 bdev_null0 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.222 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:01.222 [2024-10-15 13:13:21.542759] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:01.481 { 00:33:01.481 "params": { 00:33:01.481 "name": "Nvme$subsystem", 00:33:01.481 "trtype": "$TEST_TRANSPORT", 00:33:01.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:01.481 "adrfam": "ipv4", 00:33:01.481 "trsvcid": "$NVMF_PORT", 00:33:01.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:01.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:01.481 "hdgst": ${hdgst:-false}, 00:33:01.481 "ddgst": ${ddgst:-false} 00:33:01.481 }, 00:33:01.481 "method": "bdev_nvme_attach_controller" 00:33:01.481 } 00:33:01.481 EOF 00:33:01.481 )") 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:01.481 "params": { 00:33:01.481 "name": "Nvme0", 00:33:01.481 "trtype": "tcp", 00:33:01.481 "traddr": "10.0.0.2", 00:33:01.481 "adrfam": "ipv4", 00:33:01.481 "trsvcid": "4420", 00:33:01.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:01.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:01.481 "hdgst": false, 00:33:01.481 "ddgst": false 00:33:01.481 }, 00:33:01.481 "method": "bdev_nvme_attach_controller" 00:33:01.481 }' 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:01.481 13:13:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:01.740 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:01.740 fio-3.35 
00:33:01.740 Starting 1 thread 00:33:13.945 00:33:13.945 filename0: (groupid=0, jobs=1): err= 0: pid=1467954: Tue Oct 15 13:13:32 2024 00:33:13.945 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10017msec) 00:33:13.945 slat (nsec): min=5825, max=26265, avg=6309.15, stdev=1522.85 00:33:13.945 clat (usec): min=40749, max=44045, avg=41032.18, stdev=275.07 00:33:13.945 lat (usec): min=40755, max=44071, avg=41038.49, stdev=275.39 00:33:13.945 clat percentiles (usec): 00:33:13.945 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:13.945 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:13.945 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:13.945 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:33:13.945 | 99.99th=[44303] 00:33:13.945 bw ( KiB/s): min= 384, max= 416, per=99.55%, avg=388.80, stdev=11.72, samples=20 00:33:13.945 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:13.945 lat (msec) : 50=100.00% 00:33:13.945 cpu : usr=92.53%, sys=7.21%, ctx=14, majf=0, minf=0 00:33:13.945 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:13.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:13.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:13.945 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:13.945 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:13.945 00:33:13.945 Run status group 0 (all jobs): 00:33:13.945 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10017-10017msec 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.945 00:33:13.945 real 0m11.055s 00:33:13.945 user 0m15.873s 00:33:13.945 sys 0m1.010s 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:13.945 ************************************ 00:33:13.945 END TEST fio_dif_1_default 00:33:13.945 ************************************ 00:33:13.945 13:13:32 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:13.945 13:13:32 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:13.945 13:13:32 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:13.945 13:13:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:13.945 ************************************ 00:33:13.945 START TEST fio_dif_1_multi_subsystems 00:33:13.945 ************************************ 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:13.945 bdev_null0 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:13.945 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:13.946 [2024-10-15 13:13:32.674081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:13.946 bdev_null1 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:13.946 13:13:32 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:13.946 { 00:33:13.946 "params": { 00:33:13.946 "name": "Nvme$subsystem", 00:33:13.946 "trtype": "$TEST_TRANSPORT", 00:33:13.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:13.946 "adrfam": "ipv4", 00:33:13.946 "trsvcid": "$NVMF_PORT", 00:33:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:13.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:13.946 "hdgst": ${hdgst:-false}, 00:33:13.946 "ddgst": ${ddgst:-false} 00:33:13.946 }, 00:33:13.946 "method": "bdev_nvme_attach_controller" 00:33:13.946 } 00:33:13.946 EOF 00:33:13.946 )") 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@580 -- # cat 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:13.946 { 00:33:13.946 "params": { 00:33:13.946 "name": "Nvme$subsystem", 00:33:13.946 "trtype": "$TEST_TRANSPORT", 00:33:13.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:13.946 "adrfam": "ipv4", 00:33:13.946 "trsvcid": "$NVMF_PORT", 00:33:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:13.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:13.946 "hdgst": ${hdgst:-false}, 00:33:13.946 "ddgst": ${ddgst:-false} 00:33:13.946 }, 00:33:13.946 "method": "bdev_nvme_attach_controller" 00:33:13.946 } 00:33:13.946 EOF 00:33:13.946 )") 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 
00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:13.946 "params": { 00:33:13.946 "name": "Nvme0", 00:33:13.946 "trtype": "tcp", 00:33:13.946 "traddr": "10.0.0.2", 00:33:13.946 "adrfam": "ipv4", 00:33:13.946 "trsvcid": "4420", 00:33:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:13.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:13.946 "hdgst": false, 00:33:13.946 "ddgst": false 00:33:13.946 }, 00:33:13.946 "method": "bdev_nvme_attach_controller" 00:33:13.946 },{ 00:33:13.946 "params": { 00:33:13.946 "name": "Nvme1", 00:33:13.946 "trtype": "tcp", 00:33:13.946 "traddr": "10.0.0.2", 00:33:13.946 "adrfam": "ipv4", 00:33:13.946 "trsvcid": "4420", 00:33:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:13.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:13.946 "hdgst": false, 00:33:13.946 "ddgst": false 00:33:13.946 }, 00:33:13.946 "method": "bdev_nvme_attach_controller" 00:33:13.946 }' 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:13.946 13:13:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:13.946 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:13.946 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:13.946 fio-3.35 00:33:13.946 Starting 2 threads 00:33:24.023 00:33:24.023 filename0: (groupid=0, jobs=1): err= 0: pid=1469920: Tue Oct 15 13:13:43 2024 00:33:24.023 read: IOPS=198, BW=793KiB/s (812kB/s)(7936KiB/10010msec) 00:33:24.023 slat (nsec): min=5922, max=31672, avg=6998.29, stdev=1941.85 00:33:24.023 clat (usec): min=377, max=42591, avg=20161.64, stdev=20443.61 00:33:24.023 lat (usec): min=383, max=42598, avg=20168.64, stdev=20443.07 00:33:24.023 clat percentiles (usec): 00:33:24.023 | 1.00th=[ 392], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 416], 00:33:24.023 | 30.00th=[ 424], 40.00th=[ 486], 50.00th=[ 619], 60.00th=[40633], 00:33:24.023 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:24.023 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:24.023 | 99.99th=[42730] 00:33:24.023 bw ( KiB/s): min= 670, max= 896, per=66.90%, avg=791.90, stdev=56.02, samples=20 00:33:24.023 iops : min= 167, max= 224, avg=197.95, stdev=14.06, samples=20 00:33:24.023 lat (usec) : 500=43.40%, 750=8.22%, 1000=0.20% 00:33:24.023 lat (msec) : 50=48.19% 00:33:24.023 cpu : usr=96.24%, sys=3.52%, ctx=13, majf=0, minf=99 00:33:24.023 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:24.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:24.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.023 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.023 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:24.023 filename1: (groupid=0, jobs=1): err= 0: pid=1469921: Tue Oct 15 13:13:43 2024 00:33:24.023 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:33:24.023 slat (nsec): min=5916, max=30513, avg=7652.02, stdev=2494.37 00:33:24.023 clat (usec): min=40757, max=42001, avg=41018.49, stdev=201.34 00:33:24.023 lat (usec): min=40764, max=42031, avg=41026.14, stdev=201.74 00:33:24.023 clat percentiles (usec): 00:33:24.023 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:24.023 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:24.023 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:24.023 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:24.023 | 99.99th=[42206] 00:33:24.023 bw ( KiB/s): min= 383, max= 416, per=32.82%, avg=388.75, stdev=11.75, samples=20 00:33:24.023 iops : min= 95, max= 104, avg=97.15, stdev= 2.96, samples=20 00:33:24.023 lat (msec) : 50=100.00% 00:33:24.023 cpu : usr=96.95%, sys=2.81%, ctx=11, majf=0, minf=44 00:33:24.023 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:24.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.023 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.023 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:24.023 00:33:24.023 Run status group 0 (all jobs): 00:33:24.023 READ: bw=1182KiB/s (1211kB/s), 390KiB/s-793KiB/s (399kB/s-812kB/s), io=11.6MiB (12.1MB), run=10010-10014msec 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:24.023 
13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.023 00:33:24.023 real 0m11.328s 00:33:24.023 user 0m26.722s 00:33:24.023 sys 0m0.961s 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:24.023 13:13:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:24.023 ************************************ 00:33:24.023 END TEST fio_dif_1_multi_subsystems 00:33:24.023 ************************************ 00:33:24.023 13:13:44 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:24.023 13:13:44 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:24.023 13:13:44 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:24.023 13:13:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:24.023 ************************************ 00:33:24.023 START TEST fio_dif_rand_params 00:33:24.023 ************************************ 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.023 bdev_null0 00:33:24.023 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.024 13:13:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.024 [2024-10-15 13:13:44.073265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:24.024 { 00:33:24.024 "params": { 00:33:24.024 "name": "Nvme$subsystem", 00:33:24.024 "trtype": "$TEST_TRANSPORT", 00:33:24.024 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:33:24.024 "adrfam": "ipv4", 00:33:24.024 "trsvcid": "$NVMF_PORT", 00:33:24.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:24.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:24.024 "hdgst": ${hdgst:-false}, 00:33:24.024 "ddgst": ${ddgst:-false} 00:33:24.024 }, 00:33:24.024 "method": "bdev_nvme_attach_controller" 00:33:24.024 } 00:33:24.024 EOF 00:33:24.024 )") 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:33:24.024 13:13:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:24.024 "params": { 00:33:24.024 "name": "Nvme0", 00:33:24.024 "trtype": "tcp", 00:33:24.024 "traddr": "10.0.0.2", 00:33:24.024 "adrfam": "ipv4", 00:33:24.024 "trsvcid": "4420", 00:33:24.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:24.024 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:24.024 "hdgst": false, 00:33:24.024 "ddgst": false 00:33:24.024 }, 00:33:24.024 "method": "bdev_nvme_attach_controller" 00:33:24.024 }' 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:24.024 13:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.283 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:24.283 ... 00:33:24.283 fio-3.35 00:33:24.283 Starting 3 threads 00:33:30.851 00:33:30.851 filename0: (groupid=0, jobs=1): err= 0: pid=1471810: Tue Oct 15 13:13:49 2024 00:33:30.851 read: IOPS=329, BW=41.2MiB/s (43.2MB/s)(208MiB/5045msec) 00:33:30.851 slat (nsec): min=6231, max=27824, avg=10741.55, stdev=1642.40 00:33:30.851 clat (usec): min=4943, max=49997, avg=9073.22, stdev=3325.59 00:33:30.851 lat (usec): min=4950, max=50007, avg=9083.96, stdev=3325.70 00:33:30.851 clat percentiles (usec): 00:33:30.851 | 1.00th=[ 5866], 5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 7832], 00:33:30.851 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:33:30.851 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10552], 00:33:30.851 | 99.00th=[11600], 99.50th=[46400], 99.90th=[48497], 99.95th=[50070], 00:33:30.851 | 99.99th=[50070] 00:33:30.851 bw ( KiB/s): min=38144, max=45312, per=34.44%, avg=42470.40, stdev=2133.16, samples=10 00:33:30.851 iops : min= 298, max= 354, avg=331.80, stdev=16.67, samples=10 00:33:30.851 lat (msec) : 10=86.39%, 20=12.94%, 50=0.66% 00:33:30.851 cpu : usr=94.94%, sys=4.78%, ctx=6, majf=0, minf=0 00:33:30.851 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:30.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.851 issued rwts: total=1661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.851 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:30.851 filename0: (groupid=0, jobs=1): err= 0: pid=1471811: Tue Oct 15 13:13:49 2024 00:33:30.851 read: IOPS=310, BW=38.8MiB/s (40.7MB/s)(196MiB/5044msec) 00:33:30.851 slat (nsec): min=6320, max=26539, avg=10954.28, stdev=1683.32 00:33:30.851 clat (usec): 
min=3541, max=48065, avg=9622.87, stdev=2997.57 00:33:30.851 lat (usec): min=3548, max=48078, avg=9633.82, stdev=2997.87 00:33:30.851 clat percentiles (usec): 00:33:30.851 | 1.00th=[ 5800], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 8455], 00:33:30.851 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9896], 00:33:30.851 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11076], 95.00th=[11338], 00:33:30.851 | 99.00th=[12256], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:33:30.851 | 99.99th=[47973] 00:33:30.851 bw ( KiB/s): min=36864, max=42496, per=32.47%, avg=40038.40, stdev=1711.78, samples=10 00:33:30.851 iops : min= 288, max= 332, avg=312.80, stdev=13.37, samples=10 00:33:30.851 lat (msec) : 4=0.26%, 10=64.11%, 20=35.12%, 50=0.51% 00:33:30.851 cpu : usr=93.99%, sys=5.73%, ctx=8, majf=0, minf=11 00:33:30.851 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:30.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.851 issued rwts: total=1566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.851 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:30.851 filename0: (groupid=0, jobs=1): err= 0: pid=1471812: Tue Oct 15 13:13:49 2024 00:33:30.851 read: IOPS=326, BW=40.8MiB/s (42.8MB/s)(204MiB/5004msec) 00:33:30.851 slat (nsec): min=6283, max=26606, avg=10864.75, stdev=1827.34 00:33:30.851 clat (usec): min=3631, max=49407, avg=9178.97, stdev=3089.69 00:33:30.851 lat (usec): min=3642, max=49415, avg=9189.84, stdev=3089.54 00:33:30.851 clat percentiles (usec): 00:33:30.851 | 1.00th=[ 5342], 5.00th=[ 6915], 10.00th=[ 7439], 20.00th=[ 7963], 00:33:30.851 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:33:30.851 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10552], 95.00th=[10814], 00:33:30.851 | 99.00th=[11731], 99.50th=[45351], 99.90th=[48497], 99.95th=[49546], 00:33:30.851 | 
99.99th=[49546] 00:33:30.851 bw ( KiB/s): min=36864, max=46592, per=33.63%, avg=41472.00, stdev=2873.59, samples=9 00:33:30.851 iops : min= 288, max= 364, avg=324.00, stdev=22.45, samples=9 00:33:30.851 lat (msec) : 4=0.55%, 10=78.75%, 20=20.15%, 50=0.55% 00:33:30.851 cpu : usr=94.26%, sys=5.46%, ctx=9, majf=0, minf=9 00:33:30.851 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:30.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.851 issued rwts: total=1633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.851 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:30.851 00:33:30.851 Run status group 0 (all jobs): 00:33:30.851 READ: bw=120MiB/s (126MB/s), 38.8MiB/s-41.2MiB/s (40.7MB/s-43.2MB/s), io=608MiB (637MB), run=5004-5045msec 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 bdev_null0 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 [2024-10-15 13:13:50.180503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 bdev_null1 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 bdev_null2 00:33:30.852 
13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:30.852 { 00:33:30.852 "params": { 00:33:30.852 "name": "Nvme$subsystem", 00:33:30.852 "trtype": "$TEST_TRANSPORT", 00:33:30.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.852 "adrfam": "ipv4", 00:33:30.852 "trsvcid": "$NVMF_PORT", 00:33:30.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.852 "hdgst": ${hdgst:-false}, 00:33:30.852 "ddgst": ${ddgst:-false} 00:33:30.852 }, 00:33:30.852 "method": "bdev_nvme_attach_controller" 00:33:30.852 } 00:33:30.852 EOF 00:33:30.852 )") 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:33:30.852 
13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:30.852 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:30.852 { 00:33:30.852 "params": { 00:33:30.852 "name": "Nvme$subsystem", 00:33:30.852 "trtype": "$TEST_TRANSPORT", 00:33:30.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.853 "adrfam": "ipv4", 00:33:30.853 "trsvcid": "$NVMF_PORT", 00:33:30.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.853 "hdgst": ${hdgst:-false}, 00:33:30.853 "ddgst": ${ddgst:-false} 00:33:30.853 }, 00:33:30.853 "method": "bdev_nvme_attach_controller" 00:33:30.853 } 00:33:30.853 EOF 00:33:30.853 )") 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:30.853 13:13:50 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:30.853 { 00:33:30.853 "params": { 00:33:30.853 "name": "Nvme$subsystem", 00:33:30.853 "trtype": "$TEST_TRANSPORT", 00:33:30.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.853 "adrfam": "ipv4", 00:33:30.853 "trsvcid": "$NVMF_PORT", 00:33:30.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.853 "hdgst": ${hdgst:-false}, 00:33:30.853 "ddgst": ${ddgst:-false} 00:33:30.853 }, 00:33:30.853 "method": "bdev_nvme_attach_controller" 00:33:30.853 } 00:33:30.853 EOF 00:33:30.853 )") 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:30.853 "params": { 00:33:30.853 "name": "Nvme0", 00:33:30.853 "trtype": "tcp", 00:33:30.853 "traddr": "10.0.0.2", 00:33:30.853 "adrfam": "ipv4", 00:33:30.853 "trsvcid": "4420", 00:33:30.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:30.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:30.853 "hdgst": false, 00:33:30.853 "ddgst": false 00:33:30.853 }, 00:33:30.853 "method": "bdev_nvme_attach_controller" 00:33:30.853 },{ 00:33:30.853 "params": { 00:33:30.853 "name": "Nvme1", 00:33:30.853 "trtype": "tcp", 00:33:30.853 "traddr": "10.0.0.2", 00:33:30.853 "adrfam": "ipv4", 00:33:30.853 "trsvcid": "4420", 00:33:30.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:30.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:30.853 "hdgst": false, 00:33:30.853 "ddgst": false 00:33:30.853 }, 00:33:30.853 "method": "bdev_nvme_attach_controller" 00:33:30.853 },{ 00:33:30.853 "params": { 00:33:30.853 "name": "Nvme2", 00:33:30.853 "trtype": "tcp", 00:33:30.853 "traddr": "10.0.0.2", 00:33:30.853 "adrfam": "ipv4", 00:33:30.853 "trsvcid": "4420", 00:33:30.853 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:30.853 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:30.853 "hdgst": false, 00:33:30.853 "ddgst": false 00:33:30.853 }, 00:33:30.853 "method": "bdev_nvme_attach_controller" 00:33:30.853 }' 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:30.853 13:13:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:30.853 13:13:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:30.853 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:30.853 ... 00:33:30.853 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:30.853 ... 00:33:30.853 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:30.853 ... 
00:33:30.853 fio-3.35 00:33:30.853 Starting 24 threads 00:33:43.060 00:33:43.060 filename0: (groupid=0, jobs=1): err= 0: pid=1472933: Tue Oct 15 13:14:01 2024 00:33:43.060 read: IOPS=523, BW=2093KiB/s (2143kB/s)(20.8MiB/10152msec) 00:33:43.060 slat (nsec): min=7400, max=80330, avg=25728.85, stdev=16570.78 00:33:43.060 clat (msec): min=14, max=174, avg=30.28, stdev= 6.77 00:33:43.060 lat (msec): min=14, max=174, avg=30.31, stdev= 6.77 00:33:43.060 clat percentiles (msec): 00:33:43.060 | 1.00th=[ 21], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 30], 00:33:43.060 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.060 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.060 | 99.00th=[ 36], 99.50th=[ 51], 99.90th=[ 144], 99.95th=[ 144], 00:33:43.060 | 99.99th=[ 176] 00:33:43.060 bw ( KiB/s): min= 2032, max= 2224, per=4.23%, avg=2118.40, stdev=59.51, samples=20 00:33:43.060 iops : min= 508, max= 556, avg=529.60, stdev=14.88, samples=20 00:33:43.060 lat (msec) : 20=0.60%, 50=98.95%, 100=0.15%, 250=0.30% 00:33:43.060 cpu : usr=98.47%, sys=1.14%, ctx=11, majf=0, minf=9 00:33:43.060 IO depths : 1=1.2%, 2=7.3%, 4=24.5%, 8=55.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:33:43.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.060 complete : 0=0.0%, 4=94.2%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.060 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.060 filename0: (groupid=0, jobs=1): err= 0: pid=1472934: Tue Oct 15 13:14:01 2024 00:33:43.060 read: IOPS=520, BW=2081KiB/s (2131kB/s)(20.6MiB/10119msec) 00:33:43.060 slat (nsec): min=7916, max=94519, avg=40713.83, stdev=16128.06 00:33:43.060 clat (msec): min=28, max=174, avg=30.39, stdev= 8.05 00:33:43.060 lat (msec): min=28, max=174, avg=30.43, stdev= 8.05 00:33:43.060 clat percentiles (msec): 00:33:43.060 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 
00:33:43.060 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:33:43.060 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.060 | 99.00th=[ 31], 99.50th=[ 53], 99.90th=[ 176], 99.95th=[ 176], 00:33:43.060 | 99.99th=[ 176] 00:33:43.060 bw ( KiB/s): min= 1594, max= 2176, per=4.15%, avg=2082.90, stdev=138.03, samples=20 00:33:43.060 iops : min= 398, max= 544, avg=520.70, stdev=34.60, samples=20 00:33:43.060 lat (msec) : 50=99.39%, 100=0.30%, 250=0.30% 00:33:43.060 cpu : usr=98.34%, sys=1.05%, ctx=70, majf=0, minf=9 00:33:43.060 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.060 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.060 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.060 filename0: (groupid=0, jobs=1): err= 0: pid=1472935: Tue Oct 15 13:14:01 2024 00:33:43.060 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.8MiB/10155msec) 00:33:43.060 slat (usec): min=5, max=113, avg=37.52, stdev=24.22 00:33:43.060 clat (msec): min=14, max=176, avg=30.31, stdev= 7.98 00:33:43.060 lat (msec): min=14, max=176, avg=30.35, stdev= 7.98 00:33:43.060 clat percentiles (msec): 00:33:43.060 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:33:43.060 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.060 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.060 | 99.00th=[ 31], 99.50th=[ 32], 99.90th=[ 174], 99.95th=[ 174], 00:33:43.060 | 99.99th=[ 178] 00:33:43.060 bw ( KiB/s): min= 2048, max= 2176, per=4.23%, avg=2118.40, stdev=65.33, samples=20 00:33:43.060 iops : min= 512, max= 544, avg=529.60, stdev=16.33, samples=20 00:33:43.060 lat (msec) : 20=0.73%, 50=98.96%, 250=0.30% 00:33:43.060 cpu : usr=98.54%, sys=1.08%, ctx=13, majf=0, minf=9 00:33:43.060 IO 
depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.060 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.060 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.060 filename0: (groupid=0, jobs=1): err= 0: pid=1472936: Tue Oct 15 13:14:01 2024 00:33:43.060 read: IOPS=523, BW=2093KiB/s (2143kB/s)(20.8MiB/10152msec) 00:33:43.060 slat (nsec): min=7525, max=80008, avg=10854.90, stdev=4218.21 00:33:43.060 clat (msec): min=14, max=173, avg=30.48, stdev= 7.97 00:33:43.060 lat (msec): min=14, max=173, avg=30.49, stdev= 7.97 00:33:43.060 clat percentiles (msec): 00:33:43.060 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:33:43.060 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.060 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.060 | 99.00th=[ 32], 99.50th=[ 32], 99.90th=[ 174], 99.95th=[ 174], 00:33:43.060 | 99.99th=[ 174] 00:33:43.060 bw ( KiB/s): min= 2048, max= 2176, per=4.23%, avg=2118.40, stdev=65.33, samples=20 00:33:43.060 iops : min= 512, max= 544, avg=529.60, stdev=16.33, samples=20 00:33:43.060 lat (msec) : 20=0.60%, 50=99.10%, 250=0.30% 00:33:43.060 cpu : usr=98.31%, sys=1.32%, ctx=14, majf=0, minf=9 00:33:43.060 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.060 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.060 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.060 filename0: (groupid=0, jobs=1): err= 0: pid=1472937: Tue Oct 15 13:14:01 2024 00:33:43.060 read: IOPS=520, BW=2084KiB/s (2134kB/s)(20.6MiB/10136msec) 00:33:43.060 
slat (nsec): min=5949, max=38794, avg=17266.53, stdev=5966.03 00:33:43.060 clat (msec): min=29, max=170, avg=30.55, stdev= 7.75 00:33:43.060 lat (msec): min=29, max=170, avg=30.57, stdev= 7.75 00:33:43.060 clat percentiles (msec): 00:33:43.060 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:33:43.060 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.060 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.060 | 99.00th=[ 32], 99.50th=[ 42], 99.90th=[ 171], 99.95th=[ 171], 00:33:43.060 | 99.99th=[ 171] 00:33:43.060 bw ( KiB/s): min= 2031, max= 2176, per=4.20%, avg=2104.95, stdev=66.05, samples=20 00:33:43.060 iops : min= 507, max= 544, avg=526.20, stdev=16.56, samples=20 00:33:43.060 lat (msec) : 50=99.70%, 250=0.30% 00:33:43.060 cpu : usr=98.49%, sys=1.13%, ctx=58, majf=0, minf=9 00:33:43.060 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.060 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.060 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.060 filename0: (groupid=0, jobs=1): err= 0: pid=1472938: Tue Oct 15 13:14:01 2024 00:33:43.060 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.9MiB/10131msec) 00:33:43.060 slat (nsec): min=6800, max=57597, avg=14284.80, stdev=5079.72 00:33:43.060 clat (msec): min=4, max=143, avg=30.12, stdev= 6.78 00:33:43.060 lat (msec): min=4, max=143, avg=30.14, stdev= 6.78 00:33:43.060 clat percentiles (msec): 00:33:43.060 | 1.00th=[ 12], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:33:43.060 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.060 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.060 | 99.00th=[ 32], 99.50th=[ 32], 99.90th=[ 144], 99.95th=[ 144], 00:33:43.060 | 99.99th=[ 144] 00:33:43.060 bw 
( KiB/s): min= 2048, max= 2560, per=4.26%, avg=2137.60, stdev=118.19, samples=20 00:33:43.060 iops : min= 512, max= 640, avg=534.40, stdev=29.55, samples=20 00:33:43.060 lat (msec) : 10=0.90%, 20=0.90%, 50=97.91%, 250=0.30% 00:33:43.060 cpu : usr=98.46%, sys=1.17%, ctx=11, majf=0, minf=9 00:33:43.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.061 filename0: (groupid=0, jobs=1): err= 0: pid=1472939: Tue Oct 15 13:14:01 2024 00:33:43.061 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.8MiB/10155msec) 00:33:43.061 slat (nsec): min=7825, max=39204, avg=18942.78, stdev=5149.23 00:33:43.061 clat (msec): min=15, max=170, avg=30.42, stdev= 7.78 00:33:43.061 lat (msec): min=15, max=170, avg=30.44, stdev= 7.78 00:33:43.061 clat percentiles (msec): 00:33:43.061 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:33:43.061 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.061 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.061 | 99.00th=[ 32], 99.50th=[ 32], 99.90th=[ 171], 99.95th=[ 171], 00:33:43.061 | 99.99th=[ 171] 00:33:43.061 bw ( KiB/s): min= 2048, max= 2180, per=4.23%, avg=2118.60, stdev=65.52, samples=20 00:33:43.061 iops : min= 512, max= 545, avg=529.65, stdev=16.38, samples=20 00:33:43.061 lat (msec) : 20=0.30%, 50=99.40%, 250=0.30% 00:33:43.061 cpu : usr=98.38%, sys=1.25%, ctx=9, majf=0, minf=9 00:33:43.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 
issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.061 filename0: (groupid=0, jobs=1): err= 0: pid=1472940: Tue Oct 15 13:14:01 2024 00:33:43.061 read: IOPS=520, BW=2080KiB/s (2130kB/s)(20.6MiB/10122msec) 00:33:43.061 slat (usec): min=10, max=108, avg=42.11, stdev=23.32 00:33:43.061 clat (msec): min=28, max=175, avg=30.34, stdev= 8.06 00:33:43.061 lat (msec): min=28, max=175, avg=30.38, stdev= 8.06 00:33:43.061 clat percentiles (msec): 00:33:43.061 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:33:43.061 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:33:43.061 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.061 | 99.00th=[ 31], 99.50th=[ 54], 99.90th=[ 176], 99.95th=[ 176], 00:33:43.061 | 99.99th=[ 176] 00:33:43.061 bw ( KiB/s): min= 1589, max= 2176, per=4.15%, avg=2082.80, stdev=138.78, samples=20 00:33:43.061 iops : min= 397, max= 544, avg=520.65, stdev=34.79, samples=20 00:33:43.061 lat (msec) : 50=99.39%, 100=0.30%, 250=0.30% 00:33:43.061 cpu : usr=98.60%, sys=1.04%, ctx=13, majf=0, minf=9 00:33:43.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.061 filename1: (groupid=0, jobs=1): err= 0: pid=1472941: Tue Oct 15 13:14:01 2024 00:33:43.061 read: IOPS=520, BW=2081KiB/s (2131kB/s)(20.6MiB/10120msec) 00:33:43.061 slat (usec): min=7, max=116, avg=41.18, stdev=23.60 00:33:43.061 clat (msec): min=28, max=174, avg=30.33, stdev= 8.05 00:33:43.061 lat (msec): min=28, max=174, avg=30.38, stdev= 8.05 00:33:43.061 clat percentiles (msec): 00:33:43.061 | 1.00th=[ 30], 5.00th=[ 30], 
10.00th=[ 30], 20.00th=[ 30], 00:33:43.061 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:33:43.061 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.061 | 99.00th=[ 31], 99.50th=[ 54], 99.90th=[ 176], 99.95th=[ 176], 00:33:43.061 | 99.99th=[ 176] 00:33:43.061 bw ( KiB/s): min= 1589, max= 2176, per=4.15%, avg=2082.80, stdev=138.78, samples=20 00:33:43.061 iops : min= 397, max= 544, avg=520.65, stdev=34.79, samples=20 00:33:43.061 lat (msec) : 50=99.39%, 100=0.30%, 250=0.30% 00:33:43.061 cpu : usr=98.51%, sys=1.13%, ctx=6, majf=0, minf=9 00:33:43.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.061 filename1: (groupid=0, jobs=1): err= 0: pid=1472942: Tue Oct 15 13:14:01 2024 00:33:43.061 read: IOPS=528, BW=2114KiB/s (2165kB/s)(20.9MiB/10122msec) 00:33:43.061 slat (usec): min=5, max=113, avg=16.98, stdev=14.12 00:33:43.061 clat (msec): min=10, max=157, avg=30.18, stdev= 7.05 00:33:43.061 lat (msec): min=10, max=157, avg=30.19, stdev= 7.06 00:33:43.061 clat percentiles (msec): 00:33:43.061 | 1.00th=[ 20], 5.00th=[ 25], 10.00th=[ 30], 20.00th=[ 31], 00:33:43.061 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.061 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.061 | 99.00th=[ 40], 99.50th=[ 54], 99.90th=[ 159], 99.95th=[ 159], 00:33:43.061 | 99.99th=[ 159] 00:33:43.061 bw ( KiB/s): min= 1735, max= 2304, per=4.23%, avg=2119.70, stdev=126.64, samples=20 00:33:43.061 iops : min= 433, max= 576, avg=529.85, stdev=31.85, samples=20 00:33:43.061 lat (msec) : 20=1.53%, 50=97.76%, 100=0.41%, 250=0.30% 00:33:43.061 cpu : usr=98.51%, sys=1.13%, 
ctx=11, majf=0, minf=9 00:33:43.061 IO depths : 1=0.1%, 2=0.1%, 4=0.9%, 8=81.0%, 16=18.0%, 32=0.0%, >=64=0.0% 00:33:43.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 complete : 0=0.0%, 4=89.4%, 8=10.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 issued rwts: total=5350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.061 filename1: (groupid=0, jobs=1): err= 0: pid=1472943: Tue Oct 15 13:14:01 2024 00:33:43.061 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.8MiB/10155msec) 00:33:43.061 slat (usec): min=7, max=116, avg=27.66, stdev=21.10 00:33:43.061 clat (msec): min=14, max=176, avg=30.39, stdev= 8.01 00:33:43.061 lat (msec): min=14, max=176, avg=30.42, stdev= 8.01 00:33:43.061 clat percentiles (msec): 00:33:43.061 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:33:43.061 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.061 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.061 | 99.00th=[ 31], 99.50th=[ 32], 99.90th=[ 176], 99.95th=[ 176], 00:33:43.061 | 99.99th=[ 178] 00:33:43.061 bw ( KiB/s): min= 2048, max= 2176, per=4.23%, avg=2118.40, stdev=65.33, samples=20 00:33:43.061 iops : min= 512, max= 544, avg=529.60, stdev=16.33, samples=20 00:33:43.061 lat (msec) : 20=0.60%, 50=99.10%, 250=0.30% 00:33:43.061 cpu : usr=98.53%, sys=1.09%, ctx=15, majf=0, minf=9 00:33:43.061 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.061 filename1: (groupid=0, jobs=1): err= 0: pid=1472944: Tue Oct 15 13:14:01 2024 00:33:43.061 read: IOPS=527, BW=2110KiB/s 
(2161kB/s)(20.9MiB/10130msec) 00:33:43.061 slat (nsec): min=6820, max=41694, avg=18059.01, stdev=4865.41 00:33:43.061 clat (msec): min=4, max=143, avg=30.17, stdev= 6.62 00:33:43.061 lat (msec): min=4, max=143, avg=30.19, stdev= 6.62 00:33:43.061 clat percentiles (msec): 00:33:43.061 | 1.00th=[ 17], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:33:43.061 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.061 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.061 | 99.00th=[ 31], 99.50th=[ 32], 99.90th=[ 144], 99.95th=[ 144], 00:33:43.061 | 99.99th=[ 144] 00:33:43.061 bw ( KiB/s): min= 1992, max= 2432, per=4.25%, avg=2128.40, stdev=98.72, samples=20 00:33:43.061 iops : min= 498, max= 608, avg=532.10, stdev=24.68, samples=20 00:33:43.061 lat (msec) : 10=0.30%, 20=1.20%, 50=98.20%, 250=0.30% 00:33:43.061 cpu : usr=98.66%, sys=0.97%, ctx=13, majf=0, minf=9 00:33:43.061 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.061 filename1: (groupid=0, jobs=1): err= 0: pid=1472945: Tue Oct 15 13:14:01 2024 00:33:43.061 read: IOPS=520, BW=2080KiB/s (2130kB/s)(20.6MiB/10123msec) 00:33:43.061 slat (usec): min=6, max=113, avg=41.05, stdev=24.26 00:33:43.061 clat (msec): min=28, max=175, avg=30.33, stdev= 8.07 00:33:43.061 lat (msec): min=28, max=175, avg=30.38, stdev= 8.07 00:33:43.061 clat percentiles (msec): 00:33:43.061 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:33:43.061 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:33:43.061 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.061 | 99.00th=[ 31], 99.50th=[ 55], 99.90th=[ 176], 
99.95th=[ 176], 00:33:43.061 | 99.99th=[ 176] 00:33:43.061 bw ( KiB/s): min= 1589, max= 2176, per=4.15%, avg=2082.65, stdev=138.96, samples=20 00:33:43.061 iops : min= 397, max= 544, avg=520.65, stdev=34.79, samples=20 00:33:43.061 lat (msec) : 50=99.39%, 100=0.30%, 250=0.30% 00:33:43.061 cpu : usr=98.60%, sys=1.02%, ctx=9, majf=0, minf=9 00:33:43.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.061 filename1: (groupid=0, jobs=1): err= 0: pid=1472946: Tue Oct 15 13:14:01 2024 00:33:43.061 read: IOPS=520, BW=2084KiB/s (2134kB/s)(20.6MiB/10136msec) 00:33:43.061 slat (usec): min=5, max=113, avg=42.98, stdev=22.55 00:33:43.061 clat (msec): min=26, max=174, avg=30.31, stdev= 7.95 00:33:43.061 lat (msec): min=26, max=174, avg=30.36, stdev= 7.95 00:33:43.061 clat percentiles (msec): 00:33:43.061 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:33:43.061 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:33:43.061 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.061 | 99.00th=[ 31], 99.50th=[ 39], 99.90th=[ 176], 99.95th=[ 176], 00:33:43.061 | 99.99th=[ 176] 00:33:43.061 bw ( KiB/s): min= 2031, max= 2176, per=4.20%, avg=2104.75, stdev=66.23, samples=20 00:33:43.061 iops : min= 507, max= 544, avg=526.15, stdev=16.60, samples=20 00:33:43.061 lat (msec) : 50=99.70%, 250=0.30% 00:33:43.061 cpu : usr=98.48%, sys=1.14%, ctx=13, majf=0, minf=9 00:33:43.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:43.062 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.062 filename1: (groupid=0, jobs=1): err= 0: pid=1472947: Tue Oct 15 13:14:01 2024 00:33:43.062 read: IOPS=520, BW=2083KiB/s (2133kB/s)(20.6MiB/10137msec) 00:33:43.062 slat (usec): min=9, max=114, avg=42.33, stdev=23.40 00:33:43.062 clat (msec): min=26, max=174, avg=30.37, stdev= 7.95 00:33:43.062 lat (msec): min=26, max=174, avg=30.41, stdev= 7.95 00:33:43.062 clat percentiles (msec): 00:33:43.062 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:33:43.062 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.062 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.062 | 99.00th=[ 32], 99.50th=[ 41], 99.90th=[ 176], 99.95th=[ 176], 00:33:43.062 | 99.99th=[ 176] 00:33:43.062 bw ( KiB/s): min= 2023, max= 2176, per=4.20%, avg=2104.35, stdev=66.72, samples=20 00:33:43.062 iops : min= 505, max= 544, avg=526.05, stdev=16.73, samples=20 00:33:43.062 lat (msec) : 50=99.70%, 250=0.30% 00:33:43.062 cpu : usr=98.58%, sys=1.05%, ctx=33, majf=0, minf=9 00:33:43.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.062 filename1: (groupid=0, jobs=1): err= 0: pid=1472948: Tue Oct 15 13:14:01 2024 00:33:43.062 read: IOPS=520, BW=2081KiB/s (2131kB/s)(20.6MiB/10120msec) 00:33:43.062 slat (usec): min=6, max=112, avg=43.28, stdev=23.18 00:33:43.062 clat (msec): min=28, max=174, avg=30.34, stdev= 8.05 00:33:43.062 lat (msec): min=28, max=174, avg=30.38, stdev= 8.05 00:33:43.062 clat percentiles (msec): 00:33:43.062 | 1.00th=[ 
30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:33:43.062 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:33:43.062 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.062 | 99.00th=[ 31], 99.50th=[ 54], 99.90th=[ 176], 99.95th=[ 176], 00:33:43.062 | 99.99th=[ 176] 00:33:43.062 bw ( KiB/s): min= 1589, max= 2176, per=4.15%, avg=2082.80, stdev=138.78, samples=20 00:33:43.062 iops : min= 397, max= 544, avg=520.65, stdev=34.79, samples=20 00:33:43.062 lat (msec) : 50=99.39%, 100=0.30%, 250=0.30% 00:33:43.062 cpu : usr=98.48%, sys=1.15%, ctx=13, majf=0, minf=9 00:33:43.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.062 filename2: (groupid=0, jobs=1): err= 0: pid=1472949: Tue Oct 15 13:14:01 2024 00:33:43.062 read: IOPS=520, BW=2081KiB/s (2131kB/s)(20.6MiB/10120msec) 00:33:43.062 slat (usec): min=8, max=115, avg=41.37, stdev=23.61 00:33:43.062 clat (msec): min=24, max=174, avg=30.33, stdev= 8.05 00:33:43.062 lat (msec): min=24, max=174, avg=30.38, stdev= 8.05 00:33:43.062 clat percentiles (msec): 00:33:43.062 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:33:43.062 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:33:43.062 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.062 | 99.00th=[ 31], 99.50th=[ 54], 99.90th=[ 176], 99.95th=[ 176], 00:33:43.062 | 99.99th=[ 176] 00:33:43.062 bw ( KiB/s): min= 1589, max= 2176, per=4.15%, avg=2082.80, stdev=138.78, samples=20 00:33:43.062 iops : min= 397, max= 544, avg=520.65, stdev=34.79, samples=20 00:33:43.062 lat (msec) : 50=99.39%, 100=0.30%, 250=0.30% 00:33:43.062 cpu : usr=98.53%, 
sys=1.12%, ctx=12, majf=0, minf=9 00:33:43.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.062 filename2: (groupid=0, jobs=1): err= 0: pid=1472950: Tue Oct 15 13:14:01 2024 00:33:43.062 read: IOPS=527, BW=2110KiB/s (2161kB/s)(20.9MiB/10130msec) 00:33:43.062 slat (nsec): min=7313, max=45748, avg=18418.71, stdev=4874.36 00:33:43.062 clat (msec): min=4, max=143, avg=30.16, stdev= 6.62 00:33:43.062 lat (msec): min=4, max=143, avg=30.18, stdev= 6.62 00:33:43.062 clat percentiles (msec): 00:33:43.062 | 1.00th=[ 17], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:33:43.062 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.062 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.062 | 99.00th=[ 31], 99.50th=[ 32], 99.90th=[ 144], 99.95th=[ 144], 00:33:43.062 | 99.99th=[ 144] 00:33:43.062 bw ( KiB/s): min= 1992, max= 2432, per=4.25%, avg=2128.40, stdev=98.72, samples=20 00:33:43.062 iops : min= 498, max= 608, avg=532.10, stdev=24.68, samples=20 00:33:43.062 lat (msec) : 10=0.30%, 20=1.20%, 50=98.20%, 250=0.30% 00:33:43.062 cpu : usr=98.41%, sys=1.19%, ctx=16, majf=0, minf=9 00:33:43.062 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.062 filename2: (groupid=0, jobs=1): err= 0: pid=1472951: Tue Oct 15 13:14:01 2024 00:33:43.062 read: 
IOPS=520, BW=2083KiB/s (2133kB/s)(20.6MiB/10138msec) 00:33:43.062 slat (usec): min=9, max=114, avg=42.45, stdev=23.30 00:33:43.062 clat (msec): min=25, max=174, avg=30.37, stdev= 7.96 00:33:43.062 lat (msec): min=25, max=174, avg=30.41, stdev= 7.96 00:33:43.062 clat percentiles (msec): 00:33:43.062 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:33:43.062 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.062 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.062 | 99.00th=[ 32], 99.50th=[ 41], 99.90th=[ 176], 99.95th=[ 176], 00:33:43.062 | 99.99th=[ 176] 00:33:43.062 bw ( KiB/s): min= 2023, max= 2176, per=4.20%, avg=2104.35, stdev=66.72, samples=20 00:33:43.062 iops : min= 505, max= 544, avg=526.05, stdev=16.73, samples=20 00:33:43.062 lat (msec) : 50=99.70%, 250=0.30% 00:33:43.062 cpu : usr=98.59%, sys=1.05%, ctx=8, majf=0, minf=9 00:33:43.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.062 filename2: (groupid=0, jobs=1): err= 0: pid=1472952: Tue Oct 15 13:14:01 2024 00:33:43.062 read: IOPS=520, BW=2081KiB/s (2131kB/s)(20.6MiB/10120msec) 00:33:43.062 slat (usec): min=10, max=111, avg=42.14, stdev=23.28 00:33:43.062 clat (msec): min=28, max=174, avg=30.33, stdev= 8.05 00:33:43.062 lat (msec): min=28, max=174, avg=30.38, stdev= 8.05 00:33:43.062 clat percentiles (msec): 00:33:43.062 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:33:43.062 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:33:43.062 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.062 | 99.00th=[ 31], 99.50th=[ 54], 99.90th=[ 176], 99.95th=[ 
176], 00:33:43.062 | 99.99th=[ 176] 00:33:43.062 bw ( KiB/s): min= 1589, max= 2176, per=4.15%, avg=2082.80, stdev=138.78, samples=20 00:33:43.062 iops : min= 397, max= 544, avg=520.65, stdev=34.79, samples=20 00:33:43.062 lat (msec) : 50=99.39%, 100=0.30%, 250=0.30% 00:33:43.062 cpu : usr=98.60%, sys=1.04%, ctx=12, majf=0, minf=9 00:33:43.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.062 filename2: (groupid=0, jobs=1): err= 0: pid=1472953: Tue Oct 15 13:14:01 2024 00:33:43.062 read: IOPS=523, BW=2093KiB/s (2143kB/s)(20.8MiB/10153msec) 00:33:43.062 slat (usec): min=7, max=113, avg=35.98, stdev=23.93 00:33:43.062 clat (msec): min=14, max=174, avg=30.33, stdev= 7.99 00:33:43.062 lat (msec): min=14, max=174, avg=30.37, stdev= 7.99 00:33:43.062 clat percentiles (msec): 00:33:43.062 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:33:43.062 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.062 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.062 | 99.00th=[ 32], 99.50th=[ 32], 99.90th=[ 176], 99.95th=[ 176], 00:33:43.062 | 99.99th=[ 176] 00:33:43.062 bw ( KiB/s): min= 2048, max= 2176, per=4.23%, avg=2118.40, stdev=65.33, samples=20 00:33:43.062 iops : min= 512, max= 544, avg=529.60, stdev=16.33, samples=20 00:33:43.062 lat (msec) : 20=0.60%, 50=99.10%, 250=0.30% 00:33:43.062 cpu : usr=98.56%, sys=1.06%, ctx=12, majf=0, minf=9 00:33:43.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:43.062 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.062 filename2: (groupid=0, jobs=1): err= 0: pid=1472954: Tue Oct 15 13:14:01 2024 00:33:43.062 read: IOPS=541, BW=2165KiB/s (2216kB/s)(21.1MiB/10005msec) 00:33:43.062 slat (nsec): min=7373, max=40325, avg=14859.22, stdev=5495.33 00:33:43.062 clat (usec): min=1685, max=47526, avg=29447.62, stdev=3815.23 00:33:43.062 lat (usec): min=1699, max=47546, avg=29462.48, stdev=3815.41 00:33:43.062 clat percentiles (usec): 00:33:43.062 | 1.00th=[ 4555], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:43.062 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:43.062 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:33:43.062 | 99.00th=[31065], 99.50th=[31065], 99.90th=[40109], 99.95th=[40109], 00:33:43.062 | 99.99th=[47449] 00:33:43.062 bw ( KiB/s): min= 2048, max= 2992, per=4.31%, avg=2159.20, stdev=206.18, samples=20 00:33:43.062 iops : min= 512, max= 748, avg=539.80, stdev=51.54, samples=20 00:33:43.062 lat (msec) : 2=0.30%, 4=0.42%, 10=0.91%, 20=1.51%, 50=96.86% 00:33:43.062 cpu : usr=98.29%, sys=1.35%, ctx=14, majf=0, minf=9 00:33:43.062 IO depths : 1=6.0%, 2=12.1%, 4=24.3%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:43.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.062 issued rwts: total=5414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.062 filename2: (groupid=0, jobs=1): err= 0: pid=1472955: Tue Oct 15 13:14:01 2024 00:33:43.063 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.8MiB/10155msec) 00:33:43.063 slat (nsec): min=8202, max=47948, avg=18809.40, stdev=4822.46 00:33:43.063 clat (msec): min=15, max=170, avg=30.42, stdev= 7.78 00:33:43.063 lat (msec): 
min=15, max=170, avg=30.44, stdev= 7.78 00:33:43.063 clat percentiles (msec): 00:33:43.063 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:33:43.063 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.063 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.063 | 99.00th=[ 32], 99.50th=[ 32], 99.90th=[ 171], 99.95th=[ 171], 00:33:43.063 | 99.99th=[ 171] 00:33:43.063 bw ( KiB/s): min= 2048, max= 2180, per=4.23%, avg=2118.60, stdev=65.52, samples=20 00:33:43.063 iops : min= 512, max= 545, avg=529.65, stdev=16.38, samples=20 00:33:43.063 lat (msec) : 20=0.30%, 50=99.40%, 250=0.30% 00:33:43.063 cpu : usr=98.50%, sys=1.13%, ctx=15, majf=0, minf=9 00:33:43.063 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.063 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.063 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.063 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.063 filename2: (groupid=0, jobs=1): err= 0: pid=1472956: Tue Oct 15 13:14:01 2024 00:33:43.063 read: IOPS=522, BW=2088KiB/s (2138kB/s)(20.6MiB/10095msec) 00:33:43.063 slat (nsec): min=4188, max=39863, avg=15838.71, stdev=6152.60 00:33:43.063 clat (msec): min=22, max=144, avg=30.49, stdev= 6.39 00:33:43.063 lat (msec): min=22, max=144, avg=30.50, stdev= 6.39 00:33:43.063 clat percentiles (msec): 00:33:43.063 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:33:43.063 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:33:43.063 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:33:43.063 | 99.00th=[ 32], 99.50th=[ 40], 99.90th=[ 144], 99.95th=[ 144], 00:33:43.063 | 99.99th=[ 144] 00:33:43.063 bw ( KiB/s): min= 1920, max= 2176, per=4.19%, avg=2101.60, stdev=82.45, samples=20 00:33:43.063 iops : min= 480, max= 544, avg=525.40, 
stdev=20.61, samples=20 00:33:43.063 lat (msec) : 50=99.51%, 100=0.19%, 250=0.30% 00:33:43.063 cpu : usr=98.62%, sys=0.97%, ctx=11, majf=0, minf=9 00:33:43.063 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:43.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.063 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.063 issued rwts: total=5270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.063 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.063 00:33:43.063 Run status group 0 (all jobs): 00:33:43.063 READ: bw=48.9MiB/s (51.3MB/s), 2080KiB/s-2165KiB/s (2130kB/s-2216kB/s), io=497MiB (521MB), run=10005-10155msec 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 
00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 bdev_null0 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 [2024-10-15 13:14:01.957775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 bdev_null1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:43.063 13:14:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:33:43.063 13:14:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.063 13:14:02 
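For reference, the subsystem setup traced above reduces to four RPC calls per subsystem. Below is a minimal dry-run sketch of that sequence: it only prints the commands rather than executing them, so no running SPDK target is needed. The `./scripts/rpc.py` path is an assumption; the bdev parameters, NQNs, and the 10.0.0.2:4420 listener are taken directly from this trace.

```shell
# Dry-run sketch of the create_subsystem sequence from the trace above.
# rpc() echoes the command instead of executing it; replace the body with
# the real invocation to run against a live SPDK target.
rpc() { echo "./scripts/rpc.py $*"; }

for sub in 0 1; do
  # Null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 1
  rpc bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
  # NVMe-oF subsystem, namespace, and TCP listener on 10.0.0.2:4420
  rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
      --serial-number "53313233-${sub}" --allow-any-host
  rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
  rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
      -t tcp -a 10.0.0.2 -s 4420
done
```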
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:33:43.063 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.063 13:14:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:43.063 13:14:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:43.063 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:43.064 { 00:33:43.064 "params": { 00:33:43.064 "name": "Nvme$subsystem", 00:33:43.064 "trtype": "$TEST_TRANSPORT", 00:33:43.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.064 "adrfam": "ipv4", 00:33:43.064 "trsvcid": "$NVMF_PORT", 00:33:43.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.064 "hdgst": ${hdgst:-false}, 00:33:43.064 "ddgst": ${ddgst:-false} 00:33:43.064 }, 00:33:43.064 "method": "bdev_nvme_attach_controller" 00:33:43.064 } 00:33:43.064 EOF 00:33:43.064 )") 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:33:43.064 13:14:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:43.064 { 00:33:43.064 "params": { 00:33:43.064 "name": "Nvme$subsystem", 00:33:43.064 "trtype": "$TEST_TRANSPORT", 00:33:43.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.064 "adrfam": "ipv4", 00:33:43.064 "trsvcid": "$NVMF_PORT", 00:33:43.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.064 "hdgst": ${hdgst:-false}, 00:33:43.064 "ddgst": ${ddgst:-false} 00:33:43.064 }, 00:33:43.064 "method": "bdev_nvme_attach_controller" 00:33:43.064 } 00:33:43.064 EOF 00:33:43.064 )") 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:43.064 13:14:02 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:43.064 "params": { 00:33:43.064 "name": "Nvme0", 00:33:43.064 "trtype": "tcp", 00:33:43.064 "traddr": "10.0.0.2", 00:33:43.064 "adrfam": "ipv4", 00:33:43.064 "trsvcid": "4420", 00:33:43.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:43.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:43.064 "hdgst": false, 00:33:43.064 "ddgst": false 00:33:43.064 }, 00:33:43.064 "method": "bdev_nvme_attach_controller" 00:33:43.064 },{ 00:33:43.064 "params": { 00:33:43.064 "name": "Nvme1", 00:33:43.064 "trtype": "tcp", 00:33:43.064 "traddr": "10.0.0.2", 00:33:43.064 "adrfam": "ipv4", 00:33:43.064 "trsvcid": "4420", 00:33:43.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:43.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:43.064 "hdgst": false, 00:33:43.064 "ddgst": false 00:33:43.064 }, 00:33:43.064 "method": "bdev_nvme_attach_controller" 00:33:43.064 }' 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 
-- # [[ -n '' ]] 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:43.064 13:14:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.064 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:43.064 ... 00:33:43.064 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:43.064 ... 00:33:43.064 fio-3.35 00:33:43.064 Starting 4 threads 00:33:48.358 00:33:48.358 filename0: (groupid=0, jobs=1): err= 0: pid=1474899: Tue Oct 15 13:14:08 2024 00:33:48.358 read: IOPS=2730, BW=21.3MiB/s (22.4MB/s)(107MiB/5003msec) 00:33:48.358 slat (nsec): min=6112, max=68525, avg=13991.75, stdev=8138.42 00:33:48.358 clat (usec): min=1089, max=43015, avg=2887.47, stdev=1039.57 00:33:48.358 lat (usec): min=1096, max=43040, avg=2901.46, stdev=1039.72 00:33:48.358 clat percentiles (usec): 00:33:48.358 | 1.00th=[ 1844], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2606], 00:33:48.358 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2966], 00:33:48.358 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3195], 95.00th=[ 3392], 00:33:48.358 | 99.00th=[ 3949], 99.50th=[ 4228], 99.90th=[ 4883], 99.95th=[42730], 00:33:48.358 | 99.99th=[43254] 00:33:48.358 bw ( KiB/s): min=19984, max=23072, per=25.76%, avg=21916.44, stdev=876.06, samples=9 00:33:48.358 iops : min= 2498, max= 2884, avg=2739.56, stdev=109.51, samples=9 00:33:48.358 lat (msec) : 2=1.92%, 4=97.29%, 10=0.73%, 50=0.06% 00:33:48.358 cpu : usr=95.14%, sys=3.76%, ctx=160, majf=0, minf=9 00:33:48.358 IO depths : 1=0.4%, 2=6.7%, 4=64.0%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:48.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:48.358 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.358 issued rwts: total=13662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:48.358 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:48.358 filename0: (groupid=0, jobs=1): err= 0: pid=1474900: Tue Oct 15 13:14:08 2024 00:33:48.358 read: IOPS=2621, BW=20.5MiB/s (21.5MB/s)(102MiB/5002msec) 00:33:48.358 slat (nsec): min=6000, max=68541, avg=13659.11, stdev=10792.61 00:33:48.358 clat (usec): min=806, max=5459, avg=3009.82, stdev=420.26 00:33:48.358 lat (usec): min=813, max=5512, avg=3023.48, stdev=420.55 00:33:48.358 clat percentiles (usec): 00:33:48.358 | 1.00th=[ 1614], 5.00th=[ 2376], 10.00th=[ 2638], 20.00th=[ 2835], 00:33:48.358 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:33:48.358 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3425], 95.00th=[ 3687], 00:33:48.358 | 99.00th=[ 4424], 99.50th=[ 4752], 99.90th=[ 5211], 99.95th=[ 5211], 00:33:48.358 | 99.99th=[ 5407] 00:33:48.358 bw ( KiB/s): min=20288, max=22736, per=24.66%, avg=20979.56, stdev=736.90, samples=9 00:33:48.358 iops : min= 2536, max= 2842, avg=2622.44, stdev=92.11, samples=9 00:33:48.358 lat (usec) : 1000=0.02% 00:33:48.358 lat (msec) : 2=2.09%, 4=95.37%, 10=2.52% 00:33:48.358 cpu : usr=96.60%, sys=3.06%, ctx=9, majf=0, minf=9 00:33:48.358 IO depths : 1=0.3%, 2=4.1%, 4=68.3%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:48.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.358 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.358 issued rwts: total=13112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:48.358 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:48.358 filename1: (groupid=0, jobs=1): err= 0: pid=1474901: Tue Oct 15 13:14:08 2024 00:33:48.358 read: IOPS=2630, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec) 00:33:48.358 slat (nsec): min=5990, max=71774, avg=13840.40, stdev=11117.18 00:33:48.358 clat 
(usec): min=514, max=5732, avg=2996.26, stdev=424.60 00:33:48.358 lat (usec): min=521, max=5749, avg=3010.10, stdev=424.54 00:33:48.358 clat percentiles (usec): 00:33:48.358 | 1.00th=[ 1598], 5.00th=[ 2376], 10.00th=[ 2638], 20.00th=[ 2835], 00:33:48.358 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:33:48.358 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3392], 95.00th=[ 3687], 00:33:48.358 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 5080], 99.95th=[ 5276], 00:33:48.358 | 99.99th=[ 5735] 00:33:48.358 bw ( KiB/s): min=20096, max=22509, per=24.76%, avg=21064.56, stdev=647.56, samples=9 00:33:48.358 iops : min= 2512, max= 2813, avg=2633.00, stdev=80.77, samples=9 00:33:48.358 lat (usec) : 750=0.03%, 1000=0.05% 00:33:48.358 lat (msec) : 2=2.39%, 4=94.98%, 10=2.55% 00:33:48.358 cpu : usr=96.68%, sys=2.98%, ctx=6, majf=0, minf=9 00:33:48.358 IO depths : 1=0.2%, 2=6.6%, 4=66.3%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:48.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.358 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.358 issued rwts: total=13154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:48.358 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:48.358 filename1: (groupid=0, jobs=1): err= 0: pid=1474902: Tue Oct 15 13:14:08 2024 00:33:48.358 read: IOPS=2653, BW=20.7MiB/s (21.7MB/s)(104MiB/5001msec) 00:33:48.358 slat (nsec): min=6024, max=71744, avg=13890.43, stdev=10818.50 00:33:48.358 clat (usec): min=710, max=43492, avg=2971.36, stdev=1063.48 00:33:48.358 lat (usec): min=718, max=43517, avg=2985.25, stdev=1063.64 00:33:48.358 clat percentiles (usec): 00:33:48.358 | 1.00th=[ 1909], 5.00th=[ 2278], 10.00th=[ 2507], 20.00th=[ 2737], 00:33:48.358 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2966], 00:33:48.358 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3326], 95.00th=[ 3556], 00:33:48.358 | 99.00th=[ 4146], 99.50th=[ 4424], 99.90th=[ 
5080], 99.95th=[43254], 00:33:48.358 | 99.99th=[43254] 00:33:48.358 bw ( KiB/s): min=19952, max=21824, per=24.96%, avg=21232.00, stdev=571.09, samples=9 00:33:48.358 iops : min= 2494, max= 2728, avg=2654.00, stdev=71.39, samples=9 00:33:48.358 lat (usec) : 750=0.02%, 1000=0.02% 00:33:48.358 lat (msec) : 2=1.33%, 4=97.08%, 10=1.48%, 50=0.06% 00:33:48.358 cpu : usr=96.34%, sys=3.34%, ctx=7, majf=0, minf=9 00:33:48.358 IO depths : 1=0.3%, 2=5.8%, 4=66.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:48.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.358 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.358 issued rwts: total=13272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:48.358 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:48.358 00:33:48.358 Run status group 0 (all jobs): 00:33:48.358 READ: bw=83.1MiB/s (87.1MB/s), 20.5MiB/s-21.3MiB/s (21.5MB/s-22.4MB/s), io=416MiB (436MB), run=5001-5003msec 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.358 00:33:48.358 real 0m24.194s 00:33:48.358 user 4m54.684s 00:33:48.358 sys 0m5.202s 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:48.358 13:14:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:48.358 ************************************ 00:33:48.358 END TEST fio_dif_rand_params 00:33:48.358 ************************************ 00:33:48.358 13:14:08 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:48.358 13:14:08 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:48.358 13:14:08 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:48.358 13:14:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:48.358 ************************************ 00:33:48.358 START TEST fio_dif_digest 00:33:48.358 ************************************ 00:33:48.358 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:33:48.358 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:48.358 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:48.359 bdev_null0 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:48.359 [2024-10-15 13:14:08.339339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@558 -- # config=() 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:48.359 { 00:33:48.359 "params": { 00:33:48.359 "name": "Nvme$subsystem", 00:33:48.359 "trtype": "$TEST_TRANSPORT", 00:33:48.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:48.359 "adrfam": "ipv4", 00:33:48.359 "trsvcid": "$NVMF_PORT", 00:33:48.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:48.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:48.359 "hdgst": ${hdgst:-false}, 00:33:48.359 "ddgst": ${ddgst:-false} 00:33:48.359 }, 00:33:48.359 "method": "bdev_nvme_attach_controller" 00:33:48.359 } 00:33:48.359 EOF 00:33:48.359 )") 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:48.359 "params": { 00:33:48.359 "name": "Nvme0", 00:33:48.359 "trtype": "tcp", 00:33:48.359 "traddr": "10.0.0.2", 00:33:48.359 "adrfam": "ipv4", 00:33:48.359 "trsvcid": "4420", 00:33:48.359 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:48.359 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:48.359 "hdgst": true, 00:33:48.359 "ddgst": true 00:33:48.359 }, 00:33:48.359 "method": "bdev_nvme_attach_controller" 00:33:48.359 }' 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:48.359 13:14:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.623 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:48.623 ... 
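The digest run launched here uses the parameters set at target/dif.sh@127-128 (bs=128k, numjobs=3, iodepth=3, runtime=10, header and data digests enabled). The job file itself is generated on the fly by gen_fio_conf and passed on /dev/fd/61, so it is not visible in the log; as a rough sketch only, it corresponds to something like the fragment below. The section name and bdev filename are assumptions, not taken from this trace.

```
[global]
ioengine=spdk_bdev          ; SPDK fio plugin, loaded via LD_PRELOAD in the trace
spdk_json_conf=/dev/fd/62   ; bdev/controller config from gen_nvmf_target_json
thread=1
rw=randread
bs=128k
numjobs=3
iodepth=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1            ; assumed name of the attached NVMe-oF bdev
```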
00:33:48.623 fio-3.35 00:33:48.623 Starting 3 threads 00:34:00.824 00:34:00.824 filename0: (groupid=0, jobs=1): err= 0: pid=1475969: Tue Oct 15 13:14:19 2024 00:34:00.824 read: IOPS=301, BW=37.7MiB/s (39.5MB/s)(379MiB/10046msec) 00:34:00.824 slat (nsec): min=6430, max=40951, avg=16660.52, stdev=6748.77 00:34:00.824 clat (usec): min=7851, max=49909, avg=9914.21, stdev=1202.27 00:34:00.824 lat (usec): min=7865, max=49933, avg=9930.87, stdev=1202.34 00:34:00.824 clat percentiles (usec): 00:34:00.824 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:34:00.824 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:34:00.824 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:34:00.824 | 99.00th=[11338], 99.50th=[11600], 99.90th=[12387], 99.95th=[49546], 00:34:00.824 | 99.99th=[50070] 00:34:00.824 bw ( KiB/s): min=37888, max=39936, per=35.77%, avg=38745.60, stdev=540.03, samples=20 00:34:00.824 iops : min= 296, max= 312, avg=302.70, stdev= 4.22, samples=20 00:34:00.824 lat (msec) : 10=55.81%, 20=44.13%, 50=0.07% 00:34:00.824 cpu : usr=96.06%, sys=3.64%, ctx=17, majf=0, minf=45 00:34:00.824 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.824 issued rwts: total=3030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.824 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:00.824 filename0: (groupid=0, jobs=1): err= 0: pid=1475970: Tue Oct 15 13:14:19 2024 00:34:00.824 read: IOPS=276, BW=34.6MiB/s (36.3MB/s)(348MiB/10046msec) 00:34:00.824 slat (usec): min=6, max=235, avg=20.81, stdev= 7.10 00:34:00.824 clat (usec): min=7596, max=48472, avg=10802.48, stdev=1217.67 00:34:00.824 lat (usec): min=7620, max=48495, avg=10823.29, stdev=1217.62 00:34:00.824 clat percentiles (usec): 00:34:00.824 | 1.00th=[ 9110], 
5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:34:00.824 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:34:00.824 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:34:00.824 | 99.00th=[12518], 99.50th=[12780], 99.90th=[14222], 99.95th=[46400], 00:34:00.824 | 99.99th=[48497] 00:34:00.824 bw ( KiB/s): min=35072, max=36352, per=32.82%, avg=35558.40, stdev=388.69, samples=20 00:34:00.824 iops : min= 274, max= 284, avg=277.80, stdev= 3.04, samples=20 00:34:00.824 lat (msec) : 10=12.52%, 20=87.41%, 50=0.07% 00:34:00.824 cpu : usr=95.76%, sys=3.91%, ctx=22, majf=0, minf=121 00:34:00.824 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.824 issued rwts: total=2780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.824 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:00.824 filename0: (groupid=0, jobs=1): err= 0: pid=1475971: Tue Oct 15 13:14:19 2024 00:34:00.824 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(337MiB/10046msec) 00:34:00.824 slat (nsec): min=6453, max=45479, avg=16461.09, stdev=6666.63 00:34:00.824 clat (usec): min=8660, max=49562, avg=11161.05, stdev=1250.47 00:34:00.824 lat (usec): min=8673, max=49588, avg=11177.51, stdev=1250.77 00:34:00.824 clat percentiles (usec): 00:34:00.824 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10290], 20.00th=[10552], 00:34:00.824 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:34:00.824 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:34:00.824 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14484], 99.95th=[48497], 00:34:00.824 | 99.99th=[49546] 00:34:00.824 bw ( KiB/s): min=33792, max=35328, per=31.79%, avg=34432.00, stdev=473.50, samples=20 00:34:00.824 iops : min= 264, max= 276, avg=269.00, stdev= 3.70, samples=20 
00:34:00.824 lat (msec) : 10=4.01%, 20=95.91%, 50=0.07% 00:34:00.824 cpu : usr=96.34%, sys=3.34%, ctx=64, majf=0, minf=31 00:34:00.824 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.824 issued rwts: total=2692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.824 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:00.824 00:34:00.824 Run status group 0 (all jobs): 00:34:00.824 READ: bw=106MiB/s (111MB/s), 33.5MiB/s-37.7MiB/s (35.1MB/s-39.5MB/s), io=1063MiB (1114MB), run=10046-10046msec 00:34:00.824 13:14:19 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.825 00:34:00.825 real 0m11.197s 
00:34:00.825 user 0m35.948s 00:34:00.825 sys 0m1.401s 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:00.825 13:14:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:00.825 ************************************ 00:34:00.825 END TEST fio_dif_digest 00:34:00.825 ************************************ 00:34:00.825 13:14:19 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:00.825 13:14:19 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:00.825 rmmod nvme_tcp 00:34:00.825 rmmod nvme_fabrics 00:34:00.825 rmmod nvme_keyring 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 1467584 ']' 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 1467584 00:34:00.825 13:14:19 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1467584 ']' 00:34:00.825 13:14:19 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1467584 00:34:00.825 13:14:19 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:34:00.825 13:14:19 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:00.825 13:14:19 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467584 00:34:00.825 13:14:19 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:00.825 13:14:19 nvmf_dif -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:00.825 13:14:19 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467584' 00:34:00.825 killing process with pid 1467584 00:34:00.825 13:14:19 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1467584 00:34:00.825 13:14:19 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1467584 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:34:00.825 13:14:19 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:02.204 Waiting for block devices as requested 00:34:02.464 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:02.464 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:02.464 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:02.722 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:02.722 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:02.722 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:02.981 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:02.981 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:02.981 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:02.981 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:03.239 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:03.239 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:03.239 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:03.498 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:03.498 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:03.498 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:03.756 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:03.756 13:14:23 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:03.756 13:14:23 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:03.756 13:14:23 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:03.756 13:14:23 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:34:03.756 13:14:23 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:34:03.756 13:14:23 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:34:03.756 13:14:23 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:03.756 13:14:23 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:03.756 13:14:23 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.756 13:14:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:03.756 13:14:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.292 13:14:25 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:06.292 00:34:06.292 real 1m13.778s 00:34:06.292 user 7m12.513s 00:34:06.292 sys 0m20.428s 00:34:06.292 13:14:26 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:06.292 13:14:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:06.292 ************************************ 00:34:06.292 END TEST nvmf_dif 00:34:06.292 ************************************ 00:34:06.292 13:14:26 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:06.292 13:14:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:06.292 13:14:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:06.292 13:14:26 -- common/autotest_common.sh@10 -- # set +x 00:34:06.292 ************************************ 00:34:06.292 START TEST nvmf_abort_qd_sizes 00:34:06.292 ************************************ 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:06.292 * Looking for test storage... 
00:34:06.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:06.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.292 --rc genhtml_branch_coverage=1 00:34:06.292 --rc genhtml_function_coverage=1 00:34:06.292 --rc genhtml_legend=1 00:34:06.292 --rc geninfo_all_blocks=1 00:34:06.292 --rc geninfo_unexecuted_blocks=1 00:34:06.292 00:34:06.292 ' 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:06.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.292 --rc genhtml_branch_coverage=1 00:34:06.292 --rc genhtml_function_coverage=1 00:34:06.292 --rc genhtml_legend=1 00:34:06.292 --rc 
geninfo_all_blocks=1 00:34:06.292 --rc geninfo_unexecuted_blocks=1 00:34:06.292 00:34:06.292 ' 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:06.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.292 --rc genhtml_branch_coverage=1 00:34:06.292 --rc genhtml_function_coverage=1 00:34:06.292 --rc genhtml_legend=1 00:34:06.292 --rc geninfo_all_blocks=1 00:34:06.292 --rc geninfo_unexecuted_blocks=1 00:34:06.292 00:34:06.292 ' 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:06.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.292 --rc genhtml_branch_coverage=1 00:34:06.292 --rc genhtml_function_coverage=1 00:34:06.292 --rc genhtml_legend=1 00:34:06.292 --rc geninfo_all_blocks=1 00:34:06.292 --rc geninfo_unexecuted_blocks=1 00:34:06.292 00:34:06.292 ' 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.292 13:14:26 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.292 13:14:26 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.293 13:14:26 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:06.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:06.293 13:14:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.567 13:14:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:11.567 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:11.568 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:11.568 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:11.568 Found net devices under 0000:86:00.0: cvl_0_0 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up 
== up ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:11.568 Found net devices under 0000:86:00.1: cvl_0_1 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.568 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.827 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:11.827 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.827 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:11.827 13:14:31 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.827 13:14:32 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.827 13:14:32 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.827 13:14:32 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:11.827 13:14:32 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:11.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:34:11.827 00:34:11.827 --- 10.0.0.2 ping statistics --- 00:34:11.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.827 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:34:11.827 13:14:32 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:11.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:34:11.827 00:34:11.827 --- 10.0.0.1 ping statistics --- 00:34:11.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.827 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:34:11.827 13:14:32 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.827 13:14:32 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:34:11.827 13:14:32 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:34:11.827 13:14:32 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:15.117 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:15.117 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:16.055 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:16.314 13:14:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=1483995 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 1483995 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1483995 ']' 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:16.314 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:16.314 [2024-10-15 13:14:36.589778] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:34:16.314 [2024-10-15 13:14:36.589828] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.572 [2024-10-15 13:14:36.664834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:16.572 [2024-10-15 13:14:36.706730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.572 [2024-10-15 13:14:36.706773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.572 [2024-10-15 13:14:36.706780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.572 [2024-10-15 13:14:36.706786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.572 [2024-10-15 13:14:36.706792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:16.572 [2024-10-15 13:14:36.708349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.572 [2024-10-15 13:14:36.708458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:16.572 [2024-10-15 13:14:36.708564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.572 [2024-10-15 13:14:36.708565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:16.572 13:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:16.572 ************************************ 00:34:16.572 START TEST spdk_target_abort 00:34:16.572 ************************************ 00:34:16.829 13:14:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:34:16.829 13:14:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:16.829 13:14:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:16.829 13:14:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.829 13:14:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:20.107 spdk_targetn1 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:20.107 [2024-10-15 13:14:39.723809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:20.107 [2024-10-15 13:14:39.770285] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:20.107 13:14:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:23.384 Initializing NVMe Controllers 00:34:23.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:23.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:23.384 Initialization complete. Launching workers. 
00:34:23.384 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16897, failed: 0 00:34:23.384 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1301, failed to submit 15596 00:34:23.384 success 769, unsuccessful 532, failed 0 00:34:23.384 13:14:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:23.384 13:14:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:26.660 Initializing NVMe Controllers 00:34:26.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:26.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:26.660 Initialization complete. Launching workers. 00:34:26.660 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8520, failed: 0 00:34:26.660 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1239, failed to submit 7281 00:34:26.660 success 316, unsuccessful 923, failed 0 00:34:26.660 13:14:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:26.660 13:14:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:29.938 Initializing NVMe Controllers 00:34:29.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:29.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:29.938 Initialization complete. Launching workers. 
00:34:29.938 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38342, failed: 0 00:34:29.938 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2690, failed to submit 35652 00:34:29.938 success 595, unsuccessful 2095, failed 0 00:34:29.938 13:14:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:29.938 13:14:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.938 13:14:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.939 13:14:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.939 13:14:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:29.939 13:14:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.939 13:14:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:31.311 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.311 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1483995 00:34:31.311 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1483995 ']' 00:34:31.311 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1483995 00:34:31.311 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:34:31.311 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:31.311 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1483995 00:34:31.311 13:14:51 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:31.311 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:31.311 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1483995' 00:34:31.311 killing process with pid 1483995 00:34:31.311 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1483995 00:34:31.311 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1483995 00:34:31.570 00:34:31.570 real 0m14.753s 00:34:31.570 user 0m56.222s 00:34:31.570 sys 0m2.714s 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:31.570 ************************************ 00:34:31.570 END TEST spdk_target_abort 00:34:31.570 ************************************ 00:34:31.570 13:14:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:31.570 13:14:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:31.570 13:14:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:31.570 13:14:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:31.570 ************************************ 00:34:31.570 START TEST kernel_target_abort 00:34:31.570 ************************************ 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:34:31.570 13:14:51 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@665 -- # local block nvme 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:31.570 13:14:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:34.103 Waiting for block devices as requested 00:34:34.362 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:34.362 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:34.362 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:34.621 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:34.621 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:34.621 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:34.880 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:34.880 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:34.880 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:35.139 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:35.139 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:35.139 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:35.139 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:35.398 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:35.398 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:35.398 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:35.657 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:34:35.657 13:14:55 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:35.657 No valid GPT data, bailing 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@693 -- # echo 1 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:34:35.657 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:35.916 13:14:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:35.916 00:34:35.916 Discovery Log Number of Records 2, Generation counter 2 00:34:35.916 =====Discovery Log Entry 0====== 00:34:35.916 trtype: tcp 00:34:35.916 adrfam: ipv4 00:34:35.916 subtype: current discovery subsystem 00:34:35.916 treq: not specified, sq flow control disable supported 00:34:35.916 portid: 1 00:34:35.916 trsvcid: 4420 00:34:35.916 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:35.916 traddr: 10.0.0.1 00:34:35.916 eflags: none 00:34:35.916 sectype: none 00:34:35.916 =====Discovery Log Entry 1====== 00:34:35.916 trtype: tcp 00:34:35.916 adrfam: ipv4 00:34:35.916 subtype: nvme subsystem 00:34:35.916 treq: not specified, sq flow control disable supported 00:34:35.916 portid: 1 00:34:35.916 trsvcid: 4420 00:34:35.916 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:35.916 traddr: 10.0.0.1 00:34:35.916 eflags: none 00:34:35.916 sectype: none 00:34:35.916 13:14:56 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:35.916 13:14:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:39.306 Initializing NVMe Controllers 00:34:39.306 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:39.306 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:39.306 Initialization complete. Launching workers. 
00:34:39.306 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94976, failed: 0 00:34:39.306 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94976, failed to submit 0 00:34:39.306 success 0, unsuccessful 94976, failed 0 00:34:39.306 13:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:39.306 13:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:42.594 Initializing NVMe Controllers 00:34:42.594 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:42.594 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:42.594 Initialization complete. Launching workers. 00:34:42.594 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146403, failed: 0 00:34:42.594 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36814, failed to submit 109589 00:34:42.594 success 0, unsuccessful 36814, failed 0 00:34:42.594 13:15:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:42.594 13:15:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:45.129 Initializing NVMe Controllers 00:34:45.129 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:45.129 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:45.129 Initialization complete. Launching workers. 
00:34:45.129 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 136899, failed: 0 00:34:45.129 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34302, failed to submit 102597 00:34:45.129 success 0, unsuccessful 34302, failed 0 00:34:45.129 13:15:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:45.129 13:15:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:45.129 13:15:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:34:45.129 13:15:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:45.129 13:15:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:45.129 13:15:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:45.129 13:15:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:45.129 13:15:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:34:45.129 13:15:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:34:45.129 13:15:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:48.421 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:48.421 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:49.799 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:49.799 00:34:49.799 real 0m18.168s 00:34:49.799 user 0m9.076s 00:34:49.799 sys 0m5.119s 00:34:49.799 13:15:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:49.799 13:15:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:49.799 ************************************ 00:34:49.799 END TEST kernel_target_abort 00:34:49.799 ************************************ 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:49.799 rmmod nvme_tcp 00:34:49.799 rmmod nvme_fabrics 00:34:49.799 rmmod nvme_keyring 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 1483995 ']' 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 1483995 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1483995 ']' 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1483995 00:34:49.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1483995) - No such process 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1483995 is not found' 00:34:49.799 Process with pid 1483995 is not found 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:34:49.799 13:15:09 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:52.336 Waiting for block devices as requested 00:34:52.595 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:52.595 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:52.854 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:52.854 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:52.854 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:52.854 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:53.113 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:53.113 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:53.113 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:53.373 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:53.373 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:53.373 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:53.373 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:53.632 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:53.632 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:53.632 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:53.891 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:53.891 13:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:53.891 13:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:53.891 13:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:53.891 13:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:34:53.891 13:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:53.891 13:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:34:53.891 13:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:53.891 13:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:53.891 13:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.891 13:15:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:53.891 13:15:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.428 13:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:56.428 00:34:56.428 real 0m50.085s 00:34:56.428 user 1m9.539s 00:34:56.428 sys 0m16.691s 00:34:56.428 13:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:56.428 13:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:56.428 ************************************ 00:34:56.428 END TEST nvmf_abort_qd_sizes 00:34:56.428 ************************************ 00:34:56.428 13:15:16 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:56.428 13:15:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:56.428 13:15:16 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:34:56.428 13:15:16 -- common/autotest_common.sh@10 -- # set +x 00:34:56.428 ************************************ 00:34:56.428 START TEST keyring_file 00:34:56.428 ************************************ 00:34:56.428 13:15:16 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:56.428 * Looking for test storage... 00:34:56.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:56.428 13:15:16 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:56.428 13:15:16 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:34:56.428 13:15:16 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:56.428 13:15:16 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:56.428 13:15:16 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:56.428 13:15:16 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:56.428 13:15:16 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:56.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.428 --rc genhtml_branch_coverage=1 00:34:56.428 --rc genhtml_function_coverage=1 00:34:56.428 --rc genhtml_legend=1 00:34:56.428 --rc geninfo_all_blocks=1 00:34:56.428 --rc geninfo_unexecuted_blocks=1 00:34:56.428 00:34:56.428 ' 00:34:56.428 13:15:16 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:56.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.428 --rc genhtml_branch_coverage=1 00:34:56.428 --rc genhtml_function_coverage=1 00:34:56.428 --rc genhtml_legend=1 00:34:56.428 --rc geninfo_all_blocks=1 00:34:56.428 --rc 
geninfo_unexecuted_blocks=1 00:34:56.428 00:34:56.428 ' 00:34:56.428 13:15:16 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:56.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.428 --rc genhtml_branch_coverage=1 00:34:56.428 --rc genhtml_function_coverage=1 00:34:56.428 --rc genhtml_legend=1 00:34:56.428 --rc geninfo_all_blocks=1 00:34:56.428 --rc geninfo_unexecuted_blocks=1 00:34:56.428 00:34:56.428 ' 00:34:56.428 13:15:16 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:56.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.428 --rc genhtml_branch_coverage=1 00:34:56.428 --rc genhtml_function_coverage=1 00:34:56.428 --rc genhtml_legend=1 00:34:56.428 --rc geninfo_all_blocks=1 00:34:56.428 --rc geninfo_unexecuted_blocks=1 00:34:56.428 00:34:56.428 ' 00:34:56.428 13:15:16 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:56.428 13:15:16 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:56.428 13:15:16 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:56.428 13:15:16 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:56.428 13:15:16 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:56.428 13:15:16 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.429 13:15:16 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.429 13:15:16 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.429 13:15:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:56.429 13:15:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@51 -- # : 0 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:34:56.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:56.429 13:15:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:56.429 13:15:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:56.429 13:15:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:56.429 13:15:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:56.429 13:15:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:56.429 13:15:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PAk0LlKFU7 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@730 
-- # key=00112233445566778899aabbccddeeff 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@731 -- # python - 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PAk0LlKFU7 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PAk0LlKFU7 00:34:56.429 13:15:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.PAk0LlKFU7 00:34:56.429 13:15:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eWHwBUXRdr 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:34:56.429 13:15:16 keyring_file -- nvmf/common.sh@731 -- # python - 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eWHwBUXRdr 00:34:56.429 13:15:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eWHwBUXRdr 00:34:56.429 13:15:16 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.eWHwBUXRdr 
00:34:56.429 13:15:16 keyring_file -- keyring/file.sh@30 -- # tgtpid=1493294 00:34:56.429 13:15:16 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:56.429 13:15:16 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1493294 00:34:56.429 13:15:16 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1493294 ']' 00:34:56.429 13:15:16 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.429 13:15:16 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:56.429 13:15:16 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.429 13:15:16 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:56.429 13:15:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:56.429 [2024-10-15 13:15:16.592244] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:34:56.429 [2024-10-15 13:15:16.592294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493294 ] 00:34:56.429 [2024-10-15 13:15:16.660397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.429 [2024-10-15 13:15:16.702407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:34:56.688 13:15:16 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:56.688 [2024-10-15 13:15:16.926495] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.688 null0 00:34:56.688 [2024-10-15 13:15:16.958546] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:56.688 [2024-10-15 13:15:16.958924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.688 13:15:16 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:56.688 [2024-10-15 13:15:16.986614] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:56.688 request: 00:34:56.688 { 00:34:56.688 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:56.688 "secure_channel": false, 00:34:56.688 "listen_address": { 00:34:56.688 "trtype": "tcp", 00:34:56.688 "traddr": "127.0.0.1", 00:34:56.688 "trsvcid": "4420" 00:34:56.688 }, 00:34:56.688 "method": "nvmf_subsystem_add_listener", 00:34:56.688 "req_id": 1 00:34:56.688 } 00:34:56.688 Got JSON-RPC error response 00:34:56.688 response: 00:34:56.688 { 00:34:56.688 "code": -32602, 00:34:56.688 "message": "Invalid parameters" 00:34:56.688 } 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:56.688 13:15:16 keyring_file -- keyring/file.sh@47 -- # bperfpid=1493311 00:34:56.688 13:15:16 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1493311 /var/tmp/bperf.sock 00:34:56.688 13:15:16 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:56.688 13:15:16 
keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1493311 ']' 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:56.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:56.688 13:15:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:56.946 [2024-10-15 13:15:17.041180] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 00:34:56.946 [2024-10-15 13:15:17.041221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493311 ] 00:34:56.946 [2024-10-15 13:15:17.108734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.946 [2024-10-15 13:15:17.150685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.946 13:15:17 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:56.946 13:15:17 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:34:56.946 13:15:17 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PAk0LlKFU7 00:34:56.946 13:15:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PAk0LlKFU7 00:34:57.203 13:15:17 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eWHwBUXRdr 00:34:57.203 13:15:17 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eWHwBUXRdr 00:34:57.461 13:15:17 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:57.461 13:15:17 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:57.461 13:15:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:57.461 13:15:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:57.461 13:15:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:57.741 13:15:17 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.PAk0LlKFU7 == \/\t\m\p\/\t\m\p\.\P\A\k\0\L\l\K\F\U\7 ]] 00:34:57.741 13:15:17 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:57.741 13:15:17 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:57.741 13:15:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:57.742 13:15:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:57.742 13:15:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:57.742 13:15:17 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.eWHwBUXRdr == \/\t\m\p\/\t\m\p\.\e\W\H\w\B\U\X\R\d\r ]] 00:34:57.742 13:15:17 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:57.742 13:15:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:57.742 13:15:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:57.742 13:15:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:57.742 13:15:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:57.742 13:15:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:34:58.007 13:15:18 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:58.007 13:15:18 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:58.007 13:15:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:58.007 13:15:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:58.007 13:15:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:58.007 13:15:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:58.007 13:15:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:58.265 13:15:18 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:58.265 13:15:18 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:58.265 13:15:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:58.265 [2024-10-15 13:15:18.531858] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:58.523 nvme0n1 00:34:58.524 13:15:18 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:58.524 13:15:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:58.524 13:15:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:58.524 13:15:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:58.524 13:15:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:58.524 13:15:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:34:58.524 13:15:18 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:58.524 13:15:18 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:58.524 13:15:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:58.524 13:15:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:58.524 13:15:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:58.524 13:15:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:58.524 13:15:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:58.782 13:15:19 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:58.782 13:15:19 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:59.040 Running I/O for 1 seconds... 00:34:59.974 19296.00 IOPS, 75.38 MiB/s 00:34:59.974 Latency(us) 00:34:59.974 [2024-10-15T11:15:20.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.974 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:59.974 nvme0n1 : 1.00 19336.75 75.53 0.00 0.00 6606.01 2652.65 9986.44 00:34:59.974 [2024-10-15T11:15:20.293Z] =================================================================================================================== 00:34:59.974 [2024-10-15T11:15:20.293Z] Total : 19336.75 75.53 0.00 0.00 6606.01 2652.65 9986.44 00:34:59.974 { 00:34:59.974 "results": [ 00:34:59.974 { 00:34:59.974 "job": "nvme0n1", 00:34:59.974 "core_mask": "0x2", 00:34:59.974 "workload": "randrw", 00:34:59.974 "percentage": 50, 00:34:59.974 "status": "finished", 00:34:59.974 "queue_depth": 128, 00:34:59.974 "io_size": 4096, 00:34:59.974 "runtime": 1.004564, 00:34:59.974 "iops": 19336.747086298135, 00:34:59.974 "mibps": 75.53416830585209, 
00:34:59.974 "io_failed": 0, 00:34:59.974 "io_timeout": 0, 00:34:59.974 "avg_latency_us": 6606.009521161978, 00:34:59.974 "min_latency_us": 2652.647619047619, 00:34:59.974 "max_latency_us": 9986.438095238096 00:34:59.974 } 00:34:59.974 ], 00:34:59.974 "core_count": 1 00:34:59.974 } 00:34:59.974 13:15:20 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:59.974 13:15:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:00.233 13:15:20 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:00.233 13:15:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:00.233 13:15:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:00.233 13:15:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.233 13:15:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:00.233 13:15:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.233 13:15:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:00.233 13:15:20 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:00.233 13:15:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:00.233 13:15:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:00.233 13:15:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.233 13:15:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:00.233 13:15:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.491 13:15:20 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:00.491 13:15:20 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:00.491 13:15:20 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:00.491 13:15:20 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:00.491 13:15:20 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:00.491 13:15:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:00.491 13:15:20 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:00.491 13:15:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:00.491 13:15:20 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:00.491 13:15:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:00.750 [2024-10-15 13:15:20.910297] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:00.750 [2024-10-15 13:15:20.911120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec32a0 (107): Transport endpoint is not connected 00:35:00.750 [2024-10-15 13:15:20.912114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec32a0 (9): Bad file descriptor 00:35:00.750 [2024-10-15 13:15:20.913116] 
nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:00.750 [2024-10-15 13:15:20.913126] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:00.750 [2024-10-15 13:15:20.913133] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:00.750 [2024-10-15 13:15:20.913141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:00.750 request: 00:35:00.750 { 00:35:00.750 "name": "nvme0", 00:35:00.750 "trtype": "tcp", 00:35:00.750 "traddr": "127.0.0.1", 00:35:00.750 "adrfam": "ipv4", 00:35:00.750 "trsvcid": "4420", 00:35:00.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:00.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:00.750 "prchk_reftag": false, 00:35:00.750 "prchk_guard": false, 00:35:00.750 "hdgst": false, 00:35:00.750 "ddgst": false, 00:35:00.750 "psk": "key1", 00:35:00.750 "allow_unrecognized_csi": false, 00:35:00.750 "method": "bdev_nvme_attach_controller", 00:35:00.750 "req_id": 1 00:35:00.750 } 00:35:00.750 Got JSON-RPC error response 00:35:00.750 response: 00:35:00.750 { 00:35:00.750 "code": -5, 00:35:00.750 "message": "Input/output error" 00:35:00.750 } 00:35:00.750 13:15:20 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:00.750 13:15:20 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:00.750 13:15:20 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:00.750 13:15:20 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:00.750 13:15:20 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:00.750 13:15:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:00.750 13:15:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:00.750 13:15:20 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:00.750 13:15:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:00.750 13:15:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:01.008 13:15:21 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:01.008 13:15:21 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:01.008 13:15:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:01.008 13:15:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:01.008 13:15:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:01.008 13:15:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:01.008 13:15:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:01.266 13:15:21 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:01.266 13:15:21 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:01.266 13:15:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:01.266 13:15:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:01.266 13:15:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:01.524 13:15:21 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:01.524 13:15:21 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:01.524 13:15:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:01.782 13:15:21 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:01.782 13:15:21 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.PAk0LlKFU7 00:35:01.782 13:15:21 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.PAk0LlKFU7 00:35:01.782 13:15:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:01.782 13:15:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.PAk0LlKFU7 00:35:01.782 13:15:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:01.782 13:15:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:01.782 13:15:21 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:01.782 13:15:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:01.782 13:15:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PAk0LlKFU7 00:35:01.782 13:15:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PAk0LlKFU7 00:35:01.782 [2024-10-15 13:15:22.103327] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PAk0LlKFU7': 0100660 00:35:01.782 [2024-10-15 13:15:22.103356] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:02.040 request: 00:35:02.040 { 00:35:02.040 "name": "key0", 00:35:02.040 "path": "/tmp/tmp.PAk0LlKFU7", 00:35:02.040 "method": "keyring_file_add_key", 00:35:02.040 "req_id": 1 00:35:02.040 } 00:35:02.040 Got JSON-RPC error response 00:35:02.040 response: 00:35:02.040 { 00:35:02.040 "code": -1, 00:35:02.040 "message": "Operation not permitted" 00:35:02.040 } 00:35:02.040 13:15:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:02.040 13:15:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:02.040 13:15:22 
keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:02.040 13:15:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:02.040 13:15:22 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.PAk0LlKFU7 00:35:02.040 13:15:22 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PAk0LlKFU7 00:35:02.040 13:15:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PAk0LlKFU7 00:35:02.040 13:15:22 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.PAk0LlKFU7 00:35:02.040 13:15:22 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:02.040 13:15:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:02.040 13:15:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:02.040 13:15:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:02.040 13:15:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:02.040 13:15:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:02.297 13:15:22 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:02.297 13:15:22 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.297 13:15:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:02.297 13:15:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.297 13:15:22 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:02.297 13:15:22 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:02.297 13:15:22 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:02.297 13:15:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:02.297 13:15:22 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.297 13:15:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.555 [2024-10-15 13:15:22.684875] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.PAk0LlKFU7': No such file or directory 00:35:02.555 [2024-10-15 13:15:22.684901] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:02.555 [2024-10-15 13:15:22.684917] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:02.555 [2024-10-15 13:15:22.684924] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:02.555 [2024-10-15 13:15:22.684931] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:02.555 [2024-10-15 13:15:22.684937] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:02.555 request: 00:35:02.555 { 00:35:02.555 "name": "nvme0", 00:35:02.555 "trtype": "tcp", 00:35:02.555 "traddr": "127.0.0.1", 00:35:02.555 "adrfam": "ipv4", 00:35:02.555 "trsvcid": "4420", 00:35:02.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:02.555 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:02.555 "prchk_reftag": false, 00:35:02.555 "prchk_guard": false, 00:35:02.555 "hdgst": false, 00:35:02.555 "ddgst": false, 00:35:02.555 "psk": "key0", 00:35:02.555 "allow_unrecognized_csi": false, 00:35:02.555 "method": "bdev_nvme_attach_controller", 00:35:02.555 "req_id": 1 00:35:02.555 } 00:35:02.555 Got JSON-RPC error response 00:35:02.555 response: 00:35:02.555 { 00:35:02.555 "code": -19, 00:35:02.555 "message": "No such device" 00:35:02.555 } 00:35:02.555 13:15:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:02.555 13:15:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:02.555 13:15:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:02.555 13:15:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:02.555 13:15:22 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:02.556 13:15:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:02.556 13:15:22 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:02.556 13:15:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:02.814 13:15:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:02.814 13:15:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:02.814 13:15:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:02.814 13:15:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:02.814 13:15:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3yfQrMiBLP 00:35:02.814 13:15:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:02.814 13:15:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:02.814 13:15:22 keyring_file -- 
nvmf/common.sh@728 -- # local prefix key digest 00:35:02.814 13:15:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:35:02.814 13:15:22 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:35:02.814 13:15:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:35:02.814 13:15:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:35:02.814 13:15:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3yfQrMiBLP 00:35:02.814 13:15:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3yfQrMiBLP 00:35:02.814 13:15:22 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.3yfQrMiBLP 00:35:02.814 13:15:22 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3yfQrMiBLP 00:35:02.814 13:15:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3yfQrMiBLP 00:35:02.814 13:15:23 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.814 13:15:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:03.071 nvme0n1 00:35:03.328 13:15:23 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:03.328 13:15:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:03.328 13:15:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:03.328 13:15:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.328 13:15:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:03.328 13:15:23 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.328 13:15:23 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:03.328 13:15:23 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:03.328 13:15:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:03.585 13:15:23 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:03.585 13:15:23 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:03.585 13:15:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.585 13:15:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:03.585 13:15:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.843 13:15:23 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:03.843 13:15:23 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:03.843 13:15:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:03.843 13:15:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:03.843 13:15:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.843 13:15:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.843 13:15:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:04.101 13:15:24 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:04.101 13:15:24 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:04.101 13:15:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:35:04.101 13:15:24 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:04.101 13:15:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.101 13:15:24 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:04.359 13:15:24 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:04.359 13:15:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3yfQrMiBLP 00:35:04.359 13:15:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3yfQrMiBLP 00:35:04.616 13:15:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eWHwBUXRdr 00:35:04.616 13:15:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eWHwBUXRdr 00:35:04.616 13:15:24 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:04.616 13:15:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:04.874 nvme0n1 00:35:04.874 13:15:25 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:04.874 13:15:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:05.132 13:15:25 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:05.132 "subsystems": [ 00:35:05.132 { 00:35:05.132 "subsystem": 
"keyring", 00:35:05.132 "config": [ 00:35:05.132 { 00:35:05.132 "method": "keyring_file_add_key", 00:35:05.132 "params": { 00:35:05.132 "name": "key0", 00:35:05.132 "path": "/tmp/tmp.3yfQrMiBLP" 00:35:05.132 } 00:35:05.132 }, 00:35:05.132 { 00:35:05.132 "method": "keyring_file_add_key", 00:35:05.132 "params": { 00:35:05.132 "name": "key1", 00:35:05.132 "path": "/tmp/tmp.eWHwBUXRdr" 00:35:05.132 } 00:35:05.132 } 00:35:05.132 ] 00:35:05.132 }, 00:35:05.132 { 00:35:05.132 "subsystem": "iobuf", 00:35:05.132 "config": [ 00:35:05.132 { 00:35:05.132 "method": "iobuf_set_options", 00:35:05.132 "params": { 00:35:05.132 "small_pool_count": 8192, 00:35:05.132 "large_pool_count": 1024, 00:35:05.132 "small_bufsize": 8192, 00:35:05.132 "large_bufsize": 135168 00:35:05.132 } 00:35:05.132 } 00:35:05.132 ] 00:35:05.132 }, 00:35:05.132 { 00:35:05.132 "subsystem": "sock", 00:35:05.132 "config": [ 00:35:05.132 { 00:35:05.132 "method": "sock_set_default_impl", 00:35:05.132 "params": { 00:35:05.132 "impl_name": "posix" 00:35:05.132 } 00:35:05.132 }, 00:35:05.132 { 00:35:05.132 "method": "sock_impl_set_options", 00:35:05.132 "params": { 00:35:05.132 "impl_name": "ssl", 00:35:05.132 "recv_buf_size": 4096, 00:35:05.132 "send_buf_size": 4096, 00:35:05.132 "enable_recv_pipe": true, 00:35:05.132 "enable_quickack": false, 00:35:05.132 "enable_placement_id": 0, 00:35:05.132 "enable_zerocopy_send_server": true, 00:35:05.132 "enable_zerocopy_send_client": false, 00:35:05.132 "zerocopy_threshold": 0, 00:35:05.132 "tls_version": 0, 00:35:05.132 "enable_ktls": false 00:35:05.132 } 00:35:05.132 }, 00:35:05.132 { 00:35:05.132 "method": "sock_impl_set_options", 00:35:05.132 "params": { 00:35:05.132 "impl_name": "posix", 00:35:05.132 "recv_buf_size": 2097152, 00:35:05.132 "send_buf_size": 2097152, 00:35:05.132 "enable_recv_pipe": true, 00:35:05.132 "enable_quickack": false, 00:35:05.132 "enable_placement_id": 0, 00:35:05.132 "enable_zerocopy_send_server": true, 00:35:05.132 
"enable_zerocopy_send_client": false, 00:35:05.132 "zerocopy_threshold": 0, 00:35:05.132 "tls_version": 0, 00:35:05.132 "enable_ktls": false 00:35:05.132 } 00:35:05.132 } 00:35:05.132 ] 00:35:05.132 }, 00:35:05.132 { 00:35:05.132 "subsystem": "vmd", 00:35:05.132 "config": [] 00:35:05.132 }, 00:35:05.132 { 00:35:05.132 "subsystem": "accel", 00:35:05.132 "config": [ 00:35:05.132 { 00:35:05.132 "method": "accel_set_options", 00:35:05.132 "params": { 00:35:05.132 "small_cache_size": 128, 00:35:05.132 "large_cache_size": 16, 00:35:05.132 "task_count": 2048, 00:35:05.132 "sequence_count": 2048, 00:35:05.132 "buf_count": 2048 00:35:05.132 } 00:35:05.132 } 00:35:05.132 ] 00:35:05.132 }, 00:35:05.132 { 00:35:05.132 "subsystem": "bdev", 00:35:05.132 "config": [ 00:35:05.132 { 00:35:05.132 "method": "bdev_set_options", 00:35:05.132 "params": { 00:35:05.132 "bdev_io_pool_size": 65535, 00:35:05.132 "bdev_io_cache_size": 256, 00:35:05.132 "bdev_auto_examine": true, 00:35:05.132 "iobuf_small_cache_size": 128, 00:35:05.132 "iobuf_large_cache_size": 16 00:35:05.132 } 00:35:05.132 }, 00:35:05.132 { 00:35:05.132 "method": "bdev_raid_set_options", 00:35:05.132 "params": { 00:35:05.132 "process_window_size_kb": 1024, 00:35:05.132 "process_max_bandwidth_mb_sec": 0 00:35:05.132 } 00:35:05.132 }, 00:35:05.132 { 00:35:05.132 "method": "bdev_iscsi_set_options", 00:35:05.132 "params": { 00:35:05.132 "timeout_sec": 30 00:35:05.132 } 00:35:05.132 }, 00:35:05.132 { 00:35:05.132 "method": "bdev_nvme_set_options", 00:35:05.132 "params": { 00:35:05.132 "action_on_timeout": "none", 00:35:05.132 "timeout_us": 0, 00:35:05.132 "timeout_admin_us": 0, 00:35:05.132 "keep_alive_timeout_ms": 10000, 00:35:05.132 "arbitration_burst": 0, 00:35:05.132 "low_priority_weight": 0, 00:35:05.132 "medium_priority_weight": 0, 00:35:05.132 "high_priority_weight": 0, 00:35:05.132 "nvme_adminq_poll_period_us": 10000, 00:35:05.132 "nvme_ioq_poll_period_us": 0, 00:35:05.132 "io_queue_requests": 512, 00:35:05.132 
"delay_cmd_submit": true, 00:35:05.132 "transport_retry_count": 4, 00:35:05.132 "bdev_retry_count": 3, 00:35:05.132 "transport_ack_timeout": 0, 00:35:05.132 "ctrlr_loss_timeout_sec": 0, 00:35:05.132 "reconnect_delay_sec": 0, 00:35:05.132 "fast_io_fail_timeout_sec": 0, 00:35:05.133 "disable_auto_failback": false, 00:35:05.133 "generate_uuids": false, 00:35:05.133 "transport_tos": 0, 00:35:05.133 "nvme_error_stat": false, 00:35:05.133 "rdma_srq_size": 0, 00:35:05.133 "io_path_stat": false, 00:35:05.133 "allow_accel_sequence": false, 00:35:05.133 "rdma_max_cq_size": 0, 00:35:05.133 "rdma_cm_event_timeout_ms": 0, 00:35:05.133 "dhchap_digests": [ 00:35:05.133 "sha256", 00:35:05.133 "sha384", 00:35:05.133 "sha512" 00:35:05.133 ], 00:35:05.133 "dhchap_dhgroups": [ 00:35:05.133 "null", 00:35:05.133 "ffdhe2048", 00:35:05.133 "ffdhe3072", 00:35:05.133 "ffdhe4096", 00:35:05.133 "ffdhe6144", 00:35:05.133 "ffdhe8192" 00:35:05.133 ] 00:35:05.133 } 00:35:05.133 }, 00:35:05.133 { 00:35:05.133 "method": "bdev_nvme_attach_controller", 00:35:05.133 "params": { 00:35:05.133 "name": "nvme0", 00:35:05.133 "trtype": "TCP", 00:35:05.133 "adrfam": "IPv4", 00:35:05.133 "traddr": "127.0.0.1", 00:35:05.133 "trsvcid": "4420", 00:35:05.133 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:05.133 "prchk_reftag": false, 00:35:05.133 "prchk_guard": false, 00:35:05.133 "ctrlr_loss_timeout_sec": 0, 00:35:05.133 "reconnect_delay_sec": 0, 00:35:05.133 "fast_io_fail_timeout_sec": 0, 00:35:05.133 "psk": "key0", 00:35:05.133 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:05.133 "hdgst": false, 00:35:05.133 "ddgst": false, 00:35:05.133 "multipath": "multipath" 00:35:05.133 } 00:35:05.133 }, 00:35:05.133 { 00:35:05.133 "method": "bdev_nvme_set_hotplug", 00:35:05.133 "params": { 00:35:05.133 "period_us": 100000, 00:35:05.133 "enable": false 00:35:05.133 } 00:35:05.133 }, 00:35:05.133 { 00:35:05.133 "method": "bdev_wait_for_examine" 00:35:05.133 } 00:35:05.133 ] 00:35:05.133 }, 00:35:05.133 { 00:35:05.133 
"subsystem": "nbd", 00:35:05.133 "config": [] 00:35:05.133 } 00:35:05.133 ] 00:35:05.133 }' 00:35:05.133 13:15:25 keyring_file -- keyring/file.sh@115 -- # killprocess 1493311 00:35:05.133 13:15:25 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1493311 ']' 00:35:05.133 13:15:25 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1493311 00:35:05.133 13:15:25 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:05.133 13:15:25 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:05.133 13:15:25 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1493311 00:35:05.393 13:15:25 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:05.393 13:15:25 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:05.393 13:15:25 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1493311' 00:35:05.393 killing process with pid 1493311 00:35:05.393 13:15:25 keyring_file -- common/autotest_common.sh@969 -- # kill 1493311 00:35:05.393 Received shutdown signal, test time was about 1.000000 seconds 00:35:05.393 00:35:05.393 Latency(us) 00:35:05.393 [2024-10-15T11:15:25.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.393 [2024-10-15T11:15:25.712Z] =================================================================================================================== 00:35:05.393 [2024-10-15T11:15:25.712Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:05.393 13:15:25 keyring_file -- common/autotest_common.sh@974 -- # wait 1493311 00:35:05.393 13:15:25 keyring_file -- keyring/file.sh@118 -- # bperfpid=1494820 00:35:05.393 13:15:25 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1494820 /var/tmp/bperf.sock 00:35:05.393 13:15:25 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1494820 ']' 00:35:05.393 13:15:25 keyring_file -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:05.393 13:15:25 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:05.393 13:15:25 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:05.393 "subsystems": [ 00:35:05.393 { 00:35:05.393 "subsystem": "keyring", 00:35:05.393 "config": [ 00:35:05.393 { 00:35:05.393 "method": "keyring_file_add_key", 00:35:05.393 "params": { 00:35:05.393 "name": "key0", 00:35:05.393 "path": "/tmp/tmp.3yfQrMiBLP" 00:35:05.393 } 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "method": "keyring_file_add_key", 00:35:05.393 "params": { 00:35:05.393 "name": "key1", 00:35:05.393 "path": "/tmp/tmp.eWHwBUXRdr" 00:35:05.393 } 00:35:05.393 } 00:35:05.393 ] 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "subsystem": "iobuf", 00:35:05.393 "config": [ 00:35:05.393 { 00:35:05.393 "method": "iobuf_set_options", 00:35:05.393 "params": { 00:35:05.393 "small_pool_count": 8192, 00:35:05.393 "large_pool_count": 1024, 00:35:05.393 "small_bufsize": 8192, 00:35:05.393 "large_bufsize": 135168 00:35:05.393 } 00:35:05.393 } 00:35:05.393 ] 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "subsystem": "sock", 00:35:05.393 "config": [ 00:35:05.393 { 00:35:05.393 "method": "sock_set_default_impl", 00:35:05.393 "params": { 00:35:05.393 "impl_name": "posix" 00:35:05.393 } 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "method": "sock_impl_set_options", 00:35:05.393 "params": { 00:35:05.393 "impl_name": "ssl", 00:35:05.393 "recv_buf_size": 4096, 00:35:05.393 "send_buf_size": 4096, 00:35:05.393 "enable_recv_pipe": true, 00:35:05.393 "enable_quickack": false, 00:35:05.393 "enable_placement_id": 0, 00:35:05.393 "enable_zerocopy_send_server": true, 00:35:05.393 "enable_zerocopy_send_client": false, 00:35:05.393 "zerocopy_threshold": 0, 00:35:05.393 "tls_version": 0, 00:35:05.393 "enable_ktls": false 00:35:05.393 } 00:35:05.393 }, 00:35:05.393 { 
00:35:05.393 "method": "sock_impl_set_options", 00:35:05.393 "params": { 00:35:05.393 "impl_name": "posix", 00:35:05.393 "recv_buf_size": 2097152, 00:35:05.393 "send_buf_size": 2097152, 00:35:05.393 "enable_recv_pipe": true, 00:35:05.393 "enable_quickack": false, 00:35:05.393 "enable_placement_id": 0, 00:35:05.393 "enable_zerocopy_send_server": true, 00:35:05.393 "enable_zerocopy_send_client": false, 00:35:05.393 "zerocopy_threshold": 0, 00:35:05.393 "tls_version": 0, 00:35:05.393 "enable_ktls": false 00:35:05.393 } 00:35:05.393 } 00:35:05.393 ] 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "subsystem": "vmd", 00:35:05.393 "config": [] 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "subsystem": "accel", 00:35:05.393 "config": [ 00:35:05.393 { 00:35:05.393 "method": "accel_set_options", 00:35:05.393 "params": { 00:35:05.393 "small_cache_size": 128, 00:35:05.393 "large_cache_size": 16, 00:35:05.393 "task_count": 2048, 00:35:05.393 "sequence_count": 2048, 00:35:05.393 "buf_count": 2048 00:35:05.393 } 00:35:05.393 } 00:35:05.393 ] 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "subsystem": "bdev", 00:35:05.393 "config": [ 00:35:05.393 { 00:35:05.393 "method": "bdev_set_options", 00:35:05.393 "params": { 00:35:05.393 "bdev_io_pool_size": 65535, 00:35:05.393 "bdev_io_cache_size": 256, 00:35:05.393 "bdev_auto_examine": true, 00:35:05.393 "iobuf_small_cache_size": 128, 00:35:05.393 "iobuf_large_cache_size": 16 00:35:05.393 } 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "method": "bdev_raid_set_options", 00:35:05.393 "params": { 00:35:05.393 "process_window_size_kb": 1024, 00:35:05.393 "process_max_bandwidth_mb_sec": 0 00:35:05.393 } 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "method": "bdev_iscsi_set_options", 00:35:05.393 "params": { 00:35:05.393 "timeout_sec": 30 00:35:05.393 } 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "method": "bdev_nvme_set_options", 00:35:05.393 "params": { 00:35:05.393 "action_on_timeout": "none", 00:35:05.393 "timeout_us": 0, 00:35:05.393 
"timeout_admin_us": 0, 00:35:05.393 "keep_alive_timeout_ms": 10000, 00:35:05.393 "arbitration_burst": 0, 00:35:05.393 "low_priority_weight": 0, 00:35:05.393 "medium_priority_weight": 0, 00:35:05.393 "high_priority_weight": 0, 00:35:05.393 "nvme_adminq_poll_period_us": 10000, 00:35:05.393 "nvme_ioq_poll_period_us": 0, 00:35:05.393 "io_queue_requests": 512, 00:35:05.393 "delay_cmd_submit": true, 00:35:05.393 "transport_retry_count": 4, 00:35:05.393 "bdev_retry_count": 3, 00:35:05.393 "transport_ack_timeout": 0, 00:35:05.393 "ctrlr_loss_timeout_sec": 0, 00:35:05.393 "reconnect_delay_sec": 0, 00:35:05.393 "fast_io_fail_timeout_sec": 0, 00:35:05.393 "disable_auto_failback": false, 00:35:05.393 "generate_uuids": false, 00:35:05.393 "transport_tos": 0, 00:35:05.393 "nvme_error_stat": false, 00:35:05.393 "rdma_srq_size": 0, 00:35:05.393 "io_path_stat": false, 00:35:05.393 "allow_accel_sequence": false, 00:35:05.393 "rdma_max_cq_size": 0, 00:35:05.393 "rdma_cm_event_timeout_ms": 0, 00:35:05.393 "dhchap_digests": [ 00:35:05.393 "sha256", 00:35:05.393 "sha384", 00:35:05.393 "sha512" 00:35:05.393 ], 00:35:05.393 "dhchap_dhgroups": [ 00:35:05.393 "null", 00:35:05.393 "ffdhe2048", 00:35:05.393 "ffdhe3072", 00:35:05.393 "ffdhe4096", 00:35:05.393 "ffdhe6144", 00:35:05.393 "ffdhe8192" 00:35:05.393 ] 00:35:05.393 } 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "method": "bdev_nvme_attach_controller", 00:35:05.393 "params": { 00:35:05.393 "name": "nvme0", 00:35:05.393 "trtype": "TCP", 00:35:05.393 "adrfam": "IPv4", 00:35:05.393 "traddr": "127.0.0.1", 00:35:05.393 "trsvcid": "4420", 00:35:05.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:05.393 "prchk_reftag": false, 00:35:05.393 "prchk_guard": false, 00:35:05.393 "ctrlr_loss_timeout_sec": 0, 00:35:05.393 "reconnect_delay_sec": 0, 00:35:05.393 "fast_io_fail_timeout_sec": 0, 00:35:05.393 "psk": "key0", 00:35:05.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:05.393 "hdgst": false, 00:35:05.393 "ddgst": false, 00:35:05.393 
"multipath": "multipath" 00:35:05.393 } 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "method": "bdev_nvme_set_hotplug", 00:35:05.393 "params": { 00:35:05.393 "period_us": 100000, 00:35:05.393 "enable": false 00:35:05.393 } 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "method": "bdev_wait_for_examine" 00:35:05.393 } 00:35:05.393 ] 00:35:05.393 }, 00:35:05.393 { 00:35:05.393 "subsystem": "nbd", 00:35:05.393 "config": [] 00:35:05.393 } 00:35:05.393 ] 00:35:05.393 }' 00:35:05.393 13:15:25 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:05.393 13:15:25 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:05.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:05.394 13:15:25 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:05.394 13:15:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:05.394 [2024-10-15 13:15:25.682234] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:35:05.394 [2024-10-15 13:15:25.682285] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494820 ] 00:35:05.652 [2024-10-15 13:15:25.750498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.652 [2024-10-15 13:15:25.788266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.652 [2024-10-15 13:15:25.948807] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:06.217 13:15:26 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:06.217 13:15:26 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:06.217 13:15:26 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:06.217 13:15:26 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:06.217 13:15:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.475 13:15:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:06.475 13:15:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:06.475 13:15:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.475 13:15:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:06.475 13:15:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.475 13:15:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.476 13:15:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.734 13:15:26 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:06.734 13:15:26 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:06.734 13:15:26 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:06.734 13:15:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.734 13:15:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.734 13:15:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:06.734 13:15:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.992 13:15:27 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:06.992 13:15:27 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:06.992 13:15:27 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:06.992 13:15:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:06.992 13:15:27 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:06.992 13:15:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:06.992 13:15:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.3yfQrMiBLP /tmp/tmp.eWHwBUXRdr 00:35:06.992 13:15:27 keyring_file -- keyring/file.sh@20 -- # killprocess 1494820 00:35:06.992 13:15:27 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1494820 ']' 00:35:06.992 13:15:27 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1494820 00:35:06.992 13:15:27 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:06.992 13:15:27 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:06.992 13:15:27 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1494820 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 1494820' 00:35:07.250 killing process with pid 1494820 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@969 -- # kill 1494820 00:35:07.250 Received shutdown signal, test time was about 1.000000 seconds 00:35:07.250 00:35:07.250 Latency(us) 00:35:07.250 [2024-10-15T11:15:27.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.250 [2024-10-15T11:15:27.569Z] =================================================================================================================== 00:35:07.250 [2024-10-15T11:15:27.569Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@974 -- # wait 1494820 00:35:07.250 13:15:27 keyring_file -- keyring/file.sh@21 -- # killprocess 1493294 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1493294 ']' 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1493294 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1493294 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1493294' 00:35:07.250 killing process with pid 1493294 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@969 -- # kill 1493294 00:35:07.250 13:15:27 keyring_file -- common/autotest_common.sh@974 -- # wait 1493294 00:35:07.828 00:35:07.828 real 0m11.637s 00:35:07.828 user 0m28.837s 00:35:07.828 sys 0m2.669s 00:35:07.828 13:15:27 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:35:07.828 13:15:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:07.828 ************************************ 00:35:07.828 END TEST keyring_file 00:35:07.828 ************************************ 00:35:07.828 13:15:27 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:35:07.829 13:15:27 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:07.829 13:15:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:07.829 13:15:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:07.829 13:15:27 -- common/autotest_common.sh@10 -- # set +x 00:35:07.829 ************************************ 00:35:07.829 START TEST keyring_linux 00:35:07.829 ************************************ 00:35:07.829 13:15:27 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:07.829 Joined session keyring: 552673150 00:35:07.829 * Looking for test storage... 
00:35:07.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:07.829 13:15:28 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:07.829 13:15:28 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:35:07.829 13:15:28 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:07.829 13:15:28 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:07.829 13:15:28 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:07.829 13:15:28 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:07.829 13:15:28 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:07.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.829 --rc genhtml_branch_coverage=1 00:35:07.829 --rc genhtml_function_coverage=1 00:35:07.829 --rc genhtml_legend=1 00:35:07.829 --rc geninfo_all_blocks=1 00:35:07.829 --rc geninfo_unexecuted_blocks=1 00:35:07.829 00:35:07.829 ' 00:35:07.829 13:15:28 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:07.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.829 --rc genhtml_branch_coverage=1 00:35:07.829 --rc genhtml_function_coverage=1 00:35:07.829 --rc genhtml_legend=1 00:35:07.829 --rc geninfo_all_blocks=1 00:35:07.829 --rc geninfo_unexecuted_blocks=1 00:35:07.829 00:35:07.829 ' 
00:35:07.829 13:15:28 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:07.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.829 --rc genhtml_branch_coverage=1 00:35:07.829 --rc genhtml_function_coverage=1 00:35:07.829 --rc genhtml_legend=1 00:35:07.829 --rc geninfo_all_blocks=1 00:35:07.830 --rc geninfo_unexecuted_blocks=1 00:35:07.830 00:35:07.830 ' 00:35:07.830 13:15:28 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:07.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.830 --rc genhtml_branch_coverage=1 00:35:07.830 --rc genhtml_function_coverage=1 00:35:07.830 --rc genhtml_legend=1 00:35:07.830 --rc geninfo_all_blocks=1 00:35:07.830 --rc geninfo_unexecuted_blocks=1 00:35:07.830 00:35:07.830 ' 00:35:07.830 13:15:28 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:07.830 13:15:28 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.830 13:15:28 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:07.830 13:15:28 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.830 13:15:28 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.830 13:15:28 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.830 13:15:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.830 13:15:28 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.830 13:15:28 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.830 13:15:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:07.830 13:15:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:07.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:07.830 13:15:28 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:07.830 13:15:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:07.830 13:15:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:07.830 13:15:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:07.830 13:15:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:07.830 13:15:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:07.830 13:15:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:07.830 13:15:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:07.830 13:15:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:07.831 13:15:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:07.831 13:15:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:07.831 13:15:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:07.831 13:15:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:07.831 13:15:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:07.831 13:15:28 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:08.091 13:15:28 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:35:08.091 13:15:28 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:35:08.091 13:15:28 keyring_linux -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff 00:35:08.091 13:15:28 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:35:08.091 13:15:28 keyring_linux -- nvmf/common.sh@731 -- # python - 00:35:08.091 13:15:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:08.091 13:15:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:08.091 /tmp/:spdk-test:key0 00:35:08.091 13:15:28 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:08.091 13:15:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:08.091 13:15:28 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:08.091 13:15:28 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:08.091 13:15:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:08.091 13:15:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:08.091 13:15:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:08.091 13:15:28 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:08.091 13:15:28 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:35:08.091 13:15:28 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:35:08.091 13:15:28 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:35:08.091 13:15:28 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:35:08.091 13:15:28 keyring_linux -- nvmf/common.sh@731 -- # python - 00:35:08.091 13:15:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:08.091 13:15:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:08.091 /tmp/:spdk-test:key1 00:35:08.091 13:15:28 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1495375 00:35:08.091 13:15:28 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 1495375 00:35:08.091 13:15:28 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:08.091 13:15:28 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1495375 ']' 00:35:08.091 13:15:28 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.091 13:15:28 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:08.091 13:15:28 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.091 13:15:28 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:08.091 13:15:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:08.091 [2024-10-15 13:15:28.279727] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:35:08.091 [2024-10-15 13:15:28.279774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495375 ] 00:35:08.091 [2024-10-15 13:15:28.347514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.091 [2024-10-15 13:15:28.389320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.350 13:15:28 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:08.350 13:15:28 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:35:08.350 13:15:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:08.350 13:15:28 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.350 13:15:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:08.351 [2024-10-15 13:15:28.597852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.351 null0 00:35:08.351 [2024-10-15 13:15:28.629919] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:08.351 [2024-10-15 13:15:28.630262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:08.351 13:15:28 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.351 13:15:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:08.351 454817680 00:35:08.351 13:15:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:08.351 165125866 00:35:08.351 13:15:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1495384 00:35:08.351 13:15:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1495384 /var/tmp/bperf.sock 00:35:08.351 13:15:28 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:08.351 13:15:28 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1495384 ']' 00:35:08.351 13:15:28 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:08.351 13:15:28 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:08.351 13:15:28 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:08.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:08.351 13:15:28 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:08.351 13:15:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:08.609 [2024-10-15 13:15:28.699572] Starting SPDK v25.01-pre git sha1 96764f31c / DPDK 24.03.0 initialization... 
00:35:08.609 [2024-10-15 13:15:28.699620] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495384 ] 00:35:08.609 [2024-10-15 13:15:28.767418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.609 [2024-10-15 13:15:28.809393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.609 13:15:28 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:08.609 13:15:28 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:35:08.609 13:15:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:08.609 13:15:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:08.868 13:15:29 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:08.868 13:15:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:09.126 13:15:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:09.126 13:15:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:09.126 [2024-10-15 13:15:29.436503] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:09.384 nvme0n1 00:35:09.384 13:15:29 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:09.384 13:15:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:09.384 13:15:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:09.384 13:15:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:09.384 13:15:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.384 13:15:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:09.642 13:15:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:09.642 13:15:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:09.642 13:15:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:09.642 13:15:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:09.642 13:15:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.642 13:15:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.642 13:15:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:09.642 13:15:29 keyring_linux -- keyring/linux.sh@25 -- # sn=454817680 00:35:09.642 13:15:29 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:09.642 13:15:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:09.642 13:15:29 keyring_linux -- keyring/linux.sh@26 -- # [[ 454817680 == \4\5\4\8\1\7\6\8\0 ]] 00:35:09.642 13:15:29 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 454817680 00:35:09.642 13:15:29 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:09.642 13:15:29 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.900 Running I/O for 1 seconds... 00:35:10.834 21695.00 IOPS, 84.75 MiB/s 00:35:10.834 Latency(us) 00:35:10.834 [2024-10-15T11:15:31.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.834 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:10.834 nvme0n1 : 1.01 21691.96 84.73 0.00 0.00 5881.28 1934.87 7302.58 00:35:10.834 [2024-10-15T11:15:31.153Z] =================================================================================================================== 00:35:10.834 [2024-10-15T11:15:31.153Z] Total : 21691.96 84.73 0.00 0.00 5881.28 1934.87 7302.58 00:35:10.834 { 00:35:10.834 "results": [ 00:35:10.834 { 00:35:10.834 "job": "nvme0n1", 00:35:10.834 "core_mask": "0x2", 00:35:10.834 "workload": "randread", 00:35:10.834 "status": "finished", 00:35:10.834 "queue_depth": 128, 00:35:10.834 "io_size": 4096, 00:35:10.834 "runtime": 1.006041, 00:35:10.834 "iops": 21691.958876427503, 00:35:10.834 "mibps": 84.73421436104493, 00:35:10.834 "io_failed": 0, 00:35:10.834 "io_timeout": 0, 00:35:10.834 "avg_latency_us": 5881.284093889583, 00:35:10.834 "min_latency_us": 1934.872380952381, 00:35:10.834 "max_latency_us": 7302.582857142857 00:35:10.834 } 00:35:10.834 ], 00:35:10.834 "core_count": 1 00:35:10.834 } 00:35:10.834 13:15:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:10.834 13:15:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:11.091 13:15:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:11.091 13:15:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:11.091 13:15:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:11.091 13:15:31 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:11.091 13:15:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:11.091 13:15:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:11.349 13:15:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:11.349 [2024-10-15 13:15:31.598447] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:11.349 [2024-10-15 13:15:31.599034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1062010 (107): Transport endpoint is not connected 00:35:11.349 [2024-10-15 13:15:31.600028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1062010 (9): Bad file descriptor 00:35:11.349 [2024-10-15 13:15:31.601029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:11.349 [2024-10-15 13:15:31.601039] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:11.349 [2024-10-15 13:15:31.601047] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:11.349 [2024-10-15 13:15:31.601055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:35:11.349 request: 00:35:11.349 { 00:35:11.349 "name": "nvme0", 00:35:11.349 "trtype": "tcp", 00:35:11.349 "traddr": "127.0.0.1", 00:35:11.349 "adrfam": "ipv4", 00:35:11.349 "trsvcid": "4420", 00:35:11.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.349 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:11.349 "prchk_reftag": false, 00:35:11.349 "prchk_guard": false, 00:35:11.349 "hdgst": false, 00:35:11.349 "ddgst": false, 00:35:11.349 "psk": ":spdk-test:key1", 00:35:11.349 "allow_unrecognized_csi": false, 00:35:11.349 "method": "bdev_nvme_attach_controller", 00:35:11.349 "req_id": 1 00:35:11.349 } 00:35:11.349 Got JSON-RPC error response 00:35:11.349 response: 00:35:11.349 { 00:35:11.349 "code": -5, 00:35:11.349 "message": "Input/output error" 00:35:11.349 } 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@33 -- # sn=454817680 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 454817680 00:35:11.349 1 links removed 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:11.349 
13:15:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@33 -- # sn=165125866 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 165125866 00:35:11.349 1 links removed 00:35:11.349 13:15:31 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1495384 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1495384 ']' 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1495384 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:11.349 13:15:31 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1495384 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1495384' 00:35:11.608 killing process with pid 1495384 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@969 -- # kill 1495384 00:35:11.608 Received shutdown signal, test time was about 1.000000 seconds 00:35:11.608 00:35:11.608 Latency(us) 00:35:11.608 [2024-10-15T11:15:31.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.608 [2024-10-15T11:15:31.927Z] =================================================================================================================== 00:35:11.608 [2024-10-15T11:15:31.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@974 -- # wait 1495384 
00:35:11.608 13:15:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1495375 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1495375 ']' 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1495375 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1495375 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1495375' 00:35:11.608 killing process with pid 1495375 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@969 -- # kill 1495375 00:35:11.608 13:15:31 keyring_linux -- common/autotest_common.sh@974 -- # wait 1495375 00:35:12.174 00:35:12.174 real 0m4.256s 00:35:12.174 user 0m7.997s 00:35:12.174 sys 0m1.414s 00:35:12.174 13:15:32 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:12.174 13:15:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:12.174 ************************************ 00:35:12.174 END TEST keyring_linux 00:35:12.174 ************************************ 00:35:12.174 13:15:32 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:35:12.174 13:15:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:12.174 13:15:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:12.174 13:15:32 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:35:12.174 13:15:32 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:35:12.174 13:15:32 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:35:12.174 13:15:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:12.174 13:15:32 -- spdk/autotest.sh@342 -- # 
'[' 0 -eq 1 ']' 00:35:12.174 13:15:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:12.174 13:15:32 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:35:12.174 13:15:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:12.174 13:15:32 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:35:12.174 13:15:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:12.174 13:15:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:12.174 13:15:32 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:35:12.174 13:15:32 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:35:12.174 13:15:32 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:35:12.174 13:15:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:12.174 13:15:32 -- common/autotest_common.sh@10 -- # set +x 00:35:12.174 13:15:32 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:35:12.174 13:15:32 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:35:12.174 13:15:32 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:35:12.174 13:15:32 -- common/autotest_common.sh@10 -- # set +x 00:35:17.443 INFO: APP EXITING 00:35:17.443 INFO: killing all VMs 00:35:17.443 INFO: killing vhost app 00:35:17.443 INFO: EXIT DONE 00:35:19.973 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:19.973 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:19.973 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:19.973 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:23.264 Cleaning 00:35:23.264 Removing: /var/run/dpdk/spdk0/config 00:35:23.264 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:23.264 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:23.264 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:23.264 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:23.264 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:23.264 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:23.264 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:23.264 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:23.264 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:23.264 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:23.264 Removing: /var/run/dpdk/spdk1/config 00:35:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:23.264 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:23.264 Removing: /var/run/dpdk/spdk2/config 00:35:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:23.264 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:23.264 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:23.264 Removing: /var/run/dpdk/spdk3/config 00:35:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:23.264 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:23.264 Removing: /var/run/dpdk/spdk4/config 00:35:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:23.264 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:35:23.264 Removing: /dev/shm/bdev_svc_trace.1 00:35:23.264 Removing: /dev/shm/nvmf_trace.0 00:35:23.264 Removing: /dev/shm/spdk_tgt_trace.pid1021930 00:35:23.264 Removing: /var/run/dpdk/spdk0 00:35:23.264 Removing: /var/run/dpdk/spdk1 00:35:23.264 Removing: /var/run/dpdk/spdk2 00:35:23.264 Removing: /var/run/dpdk/spdk3 00:35:23.264 Removing: /var/run/dpdk/spdk4 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1019569 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1020634 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1021930 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1022476 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1023399 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1023545 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1024534 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1024634 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1024898 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1026632 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1027907 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1028202 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1028492 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1028797 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1029089 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1029345 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1029591 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1029873 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1030616 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1034132 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1034397 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1034654 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1034657 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1035151 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1035157 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1035649 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1035659 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1035915 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1035924 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1036187 00:35:23.265 Removing: 
/var/run/dpdk/spdk_pid1036237 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1036756 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1037011 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1037305 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1041008 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1045447 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1055747 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1056230 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1060525 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1060973 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1065245 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1071130 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1073738 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1084447 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1093526 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1095236 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1096163 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1113053 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1117121 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1162496 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1167802 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1173555 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1179572 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1179574 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1180608 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1181623 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1182619 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1183311 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1183313 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1183554 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1183618 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1183772 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1184532 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1185400 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1186315 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1186791 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1186962 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1187234 
00:35:23.265 Removing: /var/run/dpdk/spdk_pid1188254 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1189232 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1197457 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1226212 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1230715 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1232335 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1234154 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1234385 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1234436 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1234639 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1235140 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1236978 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1237740 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1238236 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1240342 00:35:23.265 Removing: /var/run/dpdk/spdk_pid1240832 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1241470 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1245624 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1251012 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1251013 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1251014 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1255003 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1263724 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1267677 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1273710 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1275148 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1276560 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1277892 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1282612 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1286630 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1294010 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1294103 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1298725 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1298955 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1299185 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1299543 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1299647 00:35:23.525 Removing: 
/var/run/dpdk/spdk_pid1304132 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1304702 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1309562 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1312309 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1317655 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1322966 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1331626 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1338830 00:35:23.525 Removing: /var/run/dpdk/spdk_pid1338832 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1358142 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1358615 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1359236 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1359773 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1360512 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1360987 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1361458 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1362149 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1366191 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1366418 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1372485 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1372759 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1378109 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1382240 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1392178 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1392662 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1396908 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1397158 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1401916 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1407558 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1410139 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1420292 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1428973 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1430590 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1431496 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1447736 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1451947 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1454637 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1462596 
00:35:23.526 Removing: /var/run/dpdk/spdk_pid1462603 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1467769 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1469596 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1471561 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1472614 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1474651 00:35:23.526 Removing: /var/run/dpdk/spdk_pid1475856 00:35:23.785 Removing: /var/run/dpdk/spdk_pid1484612 00:35:23.785 Removing: /var/run/dpdk/spdk_pid1485076 00:35:23.785 Removing: /var/run/dpdk/spdk_pid1485537 00:35:23.785 Removing: /var/run/dpdk/spdk_pid1488021 00:35:23.785 Removing: /var/run/dpdk/spdk_pid1488487 00:35:23.785 Removing: /var/run/dpdk/spdk_pid1489079 00:35:23.785 Removing: /var/run/dpdk/spdk_pid1493294 00:35:23.785 Removing: /var/run/dpdk/spdk_pid1493311 00:35:23.785 Removing: /var/run/dpdk/spdk_pid1494820 00:35:23.785 Removing: /var/run/dpdk/spdk_pid1495375 00:35:23.785 Removing: /var/run/dpdk/spdk_pid1495384 00:35:23.785 Clean 00:35:23.785 13:15:43 -- common/autotest_common.sh@1451 -- # return 0 00:35:23.785 13:15:43 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:35:23.785 13:15:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:23.785 13:15:43 -- common/autotest_common.sh@10 -- # set +x 00:35:23.785 13:15:43 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:35:23.785 13:15:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:23.785 13:15:43 -- common/autotest_common.sh@10 -- # set +x 00:35:23.785 13:15:44 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:23.785 13:15:44 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:23.785 13:15:44 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:23.785 13:15:44 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:35:23.785 13:15:44 -- spdk/autotest.sh@394 -- # hostname 00:35:23.785 
13:15:44 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:24.044 geninfo: WARNING: invalid characters removed from testname! 00:35:46.075 13:16:04 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:47.011 13:16:07 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:48.916 13:16:09 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:50.821 13:16:10 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:52.727 13:16:12 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:54.632 13:16:14 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:56.543 13:16:16 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:56.543 13:16:16 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:35:56.543 13:16:16 -- common/autotest_common.sh@1691 -- $ lcov --version 00:35:56.543 13:16:16 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:35:56.543 13:16:16 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:35:56.543 13:16:16 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:35:56.543 13:16:16 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:35:56.543 13:16:16 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:35:56.543 13:16:16 -- scripts/common.sh@336 -- $ IFS=.-: 00:35:56.543 13:16:16 -- scripts/common.sh@336 -- $ read -ra ver1 00:35:56.543 13:16:16 -- scripts/common.sh@337 -- $ 
IFS=.-: 00:35:56.543 13:16:16 -- scripts/common.sh@337 -- $ read -ra ver2 00:35:56.543 13:16:16 -- scripts/common.sh@338 -- $ local 'op=<' 00:35:56.543 13:16:16 -- scripts/common.sh@340 -- $ ver1_l=2 00:35:56.543 13:16:16 -- scripts/common.sh@341 -- $ ver2_l=1 00:35:56.543 13:16:16 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:35:56.543 13:16:16 -- scripts/common.sh@344 -- $ case "$op" in 00:35:56.543 13:16:16 -- scripts/common.sh@345 -- $ : 1 00:35:56.543 13:16:16 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:35:56.543 13:16:16 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:56.543 13:16:16 -- scripts/common.sh@365 -- $ decimal 1 00:35:56.543 13:16:16 -- scripts/common.sh@353 -- $ local d=1 00:35:56.543 13:16:16 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:35:56.543 13:16:16 -- scripts/common.sh@355 -- $ echo 1 00:35:56.543 13:16:16 -- scripts/common.sh@365 -- $ ver1[v]=1 00:35:56.543 13:16:16 -- scripts/common.sh@366 -- $ decimal 2 00:35:56.543 13:16:16 -- scripts/common.sh@353 -- $ local d=2 00:35:56.543 13:16:16 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:35:56.543 13:16:16 -- scripts/common.sh@355 -- $ echo 2 00:35:56.543 13:16:16 -- scripts/common.sh@366 -- $ ver2[v]=2 00:35:56.543 13:16:16 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:35:56.543 13:16:16 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:35:56.543 13:16:16 -- scripts/common.sh@368 -- $ return 0 00:35:56.543 13:16:16 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:56.543 13:16:16 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:35:56.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.543 --rc genhtml_branch_coverage=1 00:35:56.543 --rc genhtml_function_coverage=1 00:35:56.543 --rc genhtml_legend=1 00:35:56.543 --rc geninfo_all_blocks=1 00:35:56.543 --rc geninfo_unexecuted_blocks=1 00:35:56.543 00:35:56.543 ' 
00:35:56.543 13:16:16 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:35:56.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.543 --rc genhtml_branch_coverage=1 00:35:56.543 --rc genhtml_function_coverage=1 00:35:56.543 --rc genhtml_legend=1 00:35:56.543 --rc geninfo_all_blocks=1 00:35:56.543 --rc geninfo_unexecuted_blocks=1 00:35:56.543 00:35:56.543 ' 00:35:56.543 13:16:16 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:35:56.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.544 --rc genhtml_branch_coverage=1 00:35:56.544 --rc genhtml_function_coverage=1 00:35:56.544 --rc genhtml_legend=1 00:35:56.544 --rc geninfo_all_blocks=1 00:35:56.544 --rc geninfo_unexecuted_blocks=1 00:35:56.544 00:35:56.544 ' 00:35:56.544 13:16:16 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:35:56.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.544 --rc genhtml_branch_coverage=1 00:35:56.544 --rc genhtml_function_coverage=1 00:35:56.544 --rc genhtml_legend=1 00:35:56.544 --rc geninfo_all_blocks=1 00:35:56.544 --rc geninfo_unexecuted_blocks=1 00:35:56.544 00:35:56.544 ' 00:35:56.544 13:16:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.544 13:16:16 -- scripts/common.sh@15 -- $ shopt -s extglob 00:35:56.544 13:16:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:56.544 13:16:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.544 13:16:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.544 13:16:16 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.544 13:16:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.544 13:16:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.544 13:16:16 -- paths/export.sh@5 -- $ export PATH 00:35:56.544 13:16:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.544 13:16:16 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:35:56.544 13:16:16 -- common/autobuild_common.sh@486 -- $ date +%s 00:35:56.544 13:16:16 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728990976.XXXXXX 
00:35:56.544 13:16:16 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728990976.zippp7 00:35:56.544 13:16:16 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:35:56.544 13:16:16 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:35:56.544 13:16:16 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:35:56.544 13:16:16 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:35:56.544 13:16:16 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:35:56.544 13:16:16 -- common/autobuild_common.sh@502 -- $ get_config_params 00:35:56.544 13:16:16 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:35:56.544 13:16:16 -- common/autotest_common.sh@10 -- $ set +x 00:35:56.544 13:16:16 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:35:56.544 13:16:16 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:35:56.544 13:16:16 -- pm/common@17 -- $ local monitor 00:35:56.544 13:16:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:56.544 13:16:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:56.544 13:16:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:56.544 13:16:16 -- pm/common@21 -- $ date +%s 00:35:56.544 13:16:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:56.544 13:16:16 -- pm/common@21 -- $ date +%s 00:35:56.544 13:16:16 -- pm/common@25 -- $ sleep 
1 00:35:56.544 13:16:16 -- pm/common@21 -- $ date +%s 00:35:56.544 13:16:16 -- pm/common@21 -- $ date +%s 00:35:56.544 13:16:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728990976 00:35:56.544 13:16:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728990976 00:35:56.544 13:16:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728990976 00:35:56.544 13:16:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728990976 00:35:56.544 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728990976_collect-cpu-load.pm.log 00:35:56.544 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728990976_collect-vmstat.pm.log 00:35:56.544 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728990976_collect-cpu-temp.pm.log 00:35:56.544 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728990976_collect-bmc-pm.bmc.pm.log 00:35:57.481 13:16:17 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:35:57.481 13:16:17 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:35:57.481 13:16:17 -- spdk/autopackage.sh@14 -- $ timing_finish 00:35:57.481 13:16:17 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 
00:35:57.481 13:16:17 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:57.481 13:16:17 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:57.481 13:16:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:35:57.481 13:16:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:35:57.481 13:16:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:35:57.481 13:16:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:57.481 13:16:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:35:57.481 13:16:17 -- pm/common@44 -- $ pid=1506019 00:35:57.481 13:16:17 -- pm/common@50 -- $ kill -TERM 1506019 00:35:57.481 13:16:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:57.481 13:16:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:35:57.481 13:16:17 -- pm/common@44 -- $ pid=1506021 00:35:57.481 13:16:17 -- pm/common@50 -- $ kill -TERM 1506021 00:35:57.481 13:16:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:57.481 13:16:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:35:57.481 13:16:17 -- pm/common@44 -- $ pid=1506022 00:35:57.481 13:16:17 -- pm/common@50 -- $ kill -TERM 1506022 00:35:57.481 13:16:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:57.481 13:16:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:35:57.481 13:16:17 -- pm/common@44 -- $ pid=1506048 00:35:57.481 13:16:17 -- pm/common@50 -- $ sudo -E kill -TERM 1506048 00:35:57.481 + [[ -n 942540 ]] 00:35:57.481 + sudo kill 942540 00:35:57.490 
[Pipeline] } 00:35:57.507 [Pipeline] // stage 00:35:57.512 [Pipeline] } 00:35:57.526 [Pipeline] // timeout 00:35:57.532 [Pipeline] } 00:35:57.548 [Pipeline] // catchError 00:35:57.554 [Pipeline] } 00:35:57.569 [Pipeline] // wrap 00:35:57.575 [Pipeline] } 00:35:57.589 [Pipeline] // catchError 00:35:57.598 [Pipeline] stage 00:35:57.601 [Pipeline] { (Epilogue) 00:35:57.614 [Pipeline] catchError 00:35:57.616 [Pipeline] { 00:35:57.629 [Pipeline] echo 00:35:57.631 Cleanup processes 00:35:57.637 [Pipeline] sh 00:35:57.921 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:57.921 1506178 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:35:57.921 1506518 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:57.934 [Pipeline] sh 00:35:58.218 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:58.218 ++ grep -v 'sudo pgrep' 00:35:58.218 ++ awk '{print $1}' 00:35:58.218 + sudo kill -9 1506178 00:35:58.230 [Pipeline] sh 00:35:58.513 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:10.735 [Pipeline] sh 00:36:11.020 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:11.020 Artifacts sizes are good 00:36:11.036 [Pipeline] archiveArtifacts 00:36:11.044 Archiving artifacts 00:36:11.167 [Pipeline] sh 00:36:11.452 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:11.467 [Pipeline] cleanWs 00:36:11.478 [WS-CLEANUP] Deleting project workspace... 00:36:11.478 [WS-CLEANUP] Deferred wipeout is used... 00:36:11.484 [WS-CLEANUP] done 00:36:11.486 [Pipeline] } 00:36:11.504 [Pipeline] // catchError 00:36:11.516 [Pipeline] sh 00:36:11.840 + logger -p user.info -t JENKINS-CI 00:36:11.849 [Pipeline] } 00:36:11.862 [Pipeline] // stage 00:36:11.868 [Pipeline] } 00:36:11.882 [Pipeline] // node 00:36:11.888 [Pipeline] End of Pipeline 00:36:11.932 Finished: SUCCESS